00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 604 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3269 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.051 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.052 The recommended git tool is: git 00:00:00.052 using credential 00000000-0000-0000-0000-000000000002 00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.078 Fetching changes from the remote Git repository 00:00:00.080 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.109 Using shallow fetch with depth 1 00:00:00.109 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.109 > git --version # timeout=10 00:00:00.145 > git --version # 'git version 2.39.2' 00:00:00.145 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.179 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.179 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.971 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.982 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.995 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.995 > git config core.sparsecheckout # timeout=10 00:00:04.005 > git read-tree -mu HEAD # timeout=10 00:00:04.022 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:04.040 Commit message: "inventory: add WCP3 to free inventory" 00:00:04.040 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.125 [Pipeline] Start of Pipeline 00:00:04.139 [Pipeline] library 00:00:04.141 Loading library shm_lib@master 00:00:04.141 Library shm_lib@master is cached. Copying from home. 00:00:04.156 [Pipeline] node 00:00:04.173 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.175 [Pipeline] { 00:00:04.185 [Pipeline] catchError 00:00:04.187 [Pipeline] { 00:00:04.199 [Pipeline] wrap 00:00:04.208 [Pipeline] { 00:00:04.214 [Pipeline] stage 00:00:04.215 [Pipeline] { (Prologue) 00:00:04.232 [Pipeline] echo 00:00:04.233 Node: VM-host-SM0 00:00:04.238 [Pipeline] cleanWs 00:00:04.248 [WS-CLEANUP] Deleting project workspace... 00:00:04.248 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.255 [WS-CLEANUP] done 00:00:04.434 [Pipeline] setCustomBuildProperty 00:00:04.512 [Pipeline] httpRequest 00:00:04.530 [Pipeline] echo 00:00:04.531 Sorcerer 10.211.164.101 is alive 00:00:04.536 [Pipeline] httpRequest 00:00:04.540 HttpMethod: GET 00:00:04.541 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.541 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.552 Response Code: HTTP/1.1 200 OK 00:00:04.553 Success: Status code 200 is in the accepted range: 200,404 00:00:04.553 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.203 [Pipeline] sh 00:00:06.485 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.505 [Pipeline] httpRequest 00:00:06.537 [Pipeline] echo 00:00:06.539 Sorcerer 10.211.164.101 is alive 00:00:06.545 [Pipeline] httpRequest 00:00:06.549 HttpMethod: GET 00:00:06.549 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:06.550 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:06.563 Response Code: HTTP/1.1 200 OK 00:00:06.563 Success: Status code 200 is in the accepted range: 200,404 00:00:06.564 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:28.011 [Pipeline] sh 00:00:28.292 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:31.589 [Pipeline] sh 00:00:31.870 + git -C spdk log --oneline -n5 00:00:31.870 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:31.870 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:31.870 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:31.870 e03c164a1 nvme: add nvme_ctrlr_lock 00:00:31.870 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:00:31.893 [Pipeline] withCredentials 00:00:31.904 > git --version # timeout=10 00:00:31.918 > git --version # 'git version 2.39.2' 00:00:31.932 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:31.934 [Pipeline] { 00:00:31.945 [Pipeline] retry 00:00:31.947 [Pipeline] { 00:00:31.965 [Pipeline] sh 00:00:32.245 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:32.257 [Pipeline] } 00:00:32.280 [Pipeline] // retry 00:00:32.286 [Pipeline] } 00:00:32.308 [Pipeline] // withCredentials 00:00:32.319 [Pipeline] httpRequest 00:00:32.341 [Pipeline] echo 00:00:32.342 Sorcerer 10.211.164.101 is alive 00:00:32.351 [Pipeline] httpRequest 00:00:32.356 HttpMethod: GET 00:00:32.356 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:32.357 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:32.367 Response Code: HTTP/1.1 200 OK 00:00:32.368 Success: Status code 200 is in the accepted range: 200,404 00:00:32.368 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:12.663 [Pipeline] sh 00:01:12.942 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:14.854 [Pipeline] sh 00:01:15.133 + git -C dpdk log --oneline -n5 00:01:15.133 caf0f5d395 version: 22.11.4 00:01:15.133 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:15.133 dc9c799c7d vhost: fix missing spinlock unlock 
00:01:15.133 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:15.133 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:15.153 [Pipeline] writeFile 00:01:15.171 [Pipeline] sh 00:01:15.450 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:15.463 [Pipeline] sh 00:01:15.746 + cat autorun-spdk.conf 00:01:15.747 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.747 SPDK_TEST_NVMF=1 00:01:15.747 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.747 SPDK_TEST_USDT=1 00:01:15.747 SPDK_RUN_UBSAN=1 00:01:15.747 SPDK_TEST_NVMF_MDNS=1 00:01:15.747 NET_TYPE=virt 00:01:15.747 SPDK_JSONRPC_GO_CLIENT=1 00:01:15.747 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:15.747 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:15.747 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.753 RUN_NIGHTLY=1 00:01:15.759 [Pipeline] } 00:01:15.781 [Pipeline] // stage 00:01:15.802 [Pipeline] stage 00:01:15.805 [Pipeline] { (Run VM) 00:01:15.823 [Pipeline] sh 00:01:16.107 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:16.107 + echo 'Start stage prepare_nvme.sh' 00:01:16.107 Start stage prepare_nvme.sh 00:01:16.107 + [[ -n 3 ]] 00:01:16.107 + disk_prefix=ex3 00:01:16.107 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:16.107 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:16.107 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:16.107 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.107 ++ SPDK_TEST_NVMF=1 00:01:16.107 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.107 ++ SPDK_TEST_USDT=1 00:01:16.107 ++ SPDK_RUN_UBSAN=1 00:01:16.107 ++ SPDK_TEST_NVMF_MDNS=1 00:01:16.107 ++ NET_TYPE=virt 00:01:16.107 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:16.107 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:16.107 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:16.107 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.107 ++ RUN_NIGHTLY=1 00:01:16.107 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:16.107 + nvme_files=() 00:01:16.107 + declare -A nvme_files 00:01:16.107 + backend_dir=/var/lib/libvirt/images/backends 00:01:16.107 + nvme_files['nvme.img']=5G 00:01:16.107 + nvme_files['nvme-cmb.img']=5G 00:01:16.107 + nvme_files['nvme-multi0.img']=4G 00:01:16.107 + nvme_files['nvme-multi1.img']=4G 00:01:16.107 + nvme_files['nvme-multi2.img']=4G 00:01:16.107 + nvme_files['nvme-openstack.img']=8G 00:01:16.107 + nvme_files['nvme-zns.img']=5G 00:01:16.107 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:16.107 + (( SPDK_TEST_FTL == 1 )) 00:01:16.107 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:16.107 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:16.107 + for nvme in "${!nvme_files[@]}" 00:01:16.107 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:16.107 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.107 + for nvme in "${!nvme_files[@]}" 00:01:16.107 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:16.107 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.107 + for nvme in "${!nvme_files[@]}" 00:01:16.107 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:16.107 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:16.107 + for nvme in "${!nvme_files[@]}" 00:01:16.107 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:16.107 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.107 + for nvme in "${!nvme_files[@]}" 00:01:16.107 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:16.107 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.107 + for nvme in "${!nvme_files[@]}" 00:01:16.107 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:16.107 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.107 + for nvme in "${!nvme_files[@]}" 00:01:16.107 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:16.365 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.365 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:16.365 + echo 'End stage prepare_nvme.sh' 00:01:16.365 End stage prepare_nvme.sh 00:01:16.380 [Pipeline] sh 00:01:16.661 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:16.662 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:01:16.662 00:01:16.662 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:16.662 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:16.662 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:16.662 HELP=0 00:01:16.662 DRY_RUN=0 00:01:16.662 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:16.662 NVME_DISKS_TYPE=nvme,nvme, 00:01:16.662 NVME_AUTO_CREATE=0 00:01:16.662 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:16.662 NVME_CMB=,, 00:01:16.662 NVME_PMR=,, 00:01:16.662 NVME_ZNS=,, 00:01:16.662 NVME_MS=,, 00:01:16.662 NVME_FDP=,, 00:01:16.662 
SPDK_VAGRANT_DISTRO=fedora38 00:01:16.662 SPDK_VAGRANT_VMCPU=10 00:01:16.662 SPDK_VAGRANT_VMRAM=12288 00:01:16.662 SPDK_VAGRANT_PROVIDER=libvirt 00:01:16.662 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:16.662 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:16.662 SPDK_OPENSTACK_NETWORK=0 00:01:16.662 VAGRANT_PACKAGE_BOX=0 00:01:16.662 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:16.662 FORCE_DISTRO=true 00:01:16.662 VAGRANT_BOX_VERSION= 00:01:16.662 EXTRA_VAGRANTFILES= 00:01:16.662 NIC_MODEL=e1000 00:01:16.662 00:01:16.662 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:01:16.662 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:19.940 Bringing machine 'default' up with 'libvirt' provider... 00:01:20.198 ==> default: Creating image (snapshot of base box volume). 00:01:20.198 ==> default: Creating domain with the following settings... 00:01:20.198 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721008999_71079c5e9b0c776d3b57 00:01:20.198 ==> default: -- Domain type: kvm 00:01:20.198 ==> default: -- Cpus: 10 00:01:20.198 ==> default: -- Feature: acpi 00:01:20.198 ==> default: -- Feature: apic 00:01:20.198 ==> default: -- Feature: pae 00:01:20.198 ==> default: -- Memory: 12288M 00:01:20.198 ==> default: -- Memory Backing: hugepages: 00:01:20.198 ==> default: -- Management MAC: 00:01:20.198 ==> default: -- Loader: 00:01:20.198 ==> default: -- Nvram: 00:01:20.198 ==> default: -- Base box: spdk/fedora38 00:01:20.198 ==> default: -- Storage pool: default 00:01:20.198 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721008999_71079c5e9b0c776d3b57.img (20G) 00:01:20.198 ==> default: -- Volume Cache: default 00:01:20.198 ==> default: -- Kernel: 00:01:20.198 ==> default: -- Initrd: 00:01:20.198 ==> default: -- Graphics Type: vnc 00:01:20.198 ==> default: -- Graphics Port: -1 00:01:20.198 ==> default: -- Graphics IP: 127.0.0.1 00:01:20.198 ==> default: -- Graphics Password: Not defined 00:01:20.198 ==> default: -- Video Type: cirrus 00:01:20.198 ==> default: -- Video VRAM: 9216 00:01:20.198 ==> default: -- Sound Type: 00:01:20.198 ==> default: -- Keymap: en-us 00:01:20.198 ==> default: -- TPM Path: 00:01:20.198 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:20.198 ==> default: -- Command line args: 00:01:20.198 ==> default: -> value=-device, 00:01:20.198 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:20.198 ==> default: -> value=-drive, 00:01:20.198 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:20.198 ==> default: -> value=-device, 00:01:20.198 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.198 ==> default: -> value=-device, 00:01:20.198 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:20.198 ==> default: -> value=-drive, 00:01:20.198 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:20.198 ==> default: -> value=-device, 00:01:20.198 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.198 ==> default: -> value=-drive, 00:01:20.198 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:20.198 ==> default: -> value=-device, 00:01:20.198 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.198 ==> default: -> value=-drive, 00:01:20.198 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:20.198 ==> default: -> value=-device, 00:01:20.198 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.456 ==> default: Creating shared folders metadata... 00:01:20.456 ==> default: Starting domain. 00:01:22.374 ==> default: Waiting for domain to get an IP address... 00:01:37.293 ==> default: Waiting for SSH to become available... 00:01:38.669 ==> default: Configuring and enabling network interfaces... 00:01:42.871 default: SSH address: 192.168.121.191:22 00:01:42.871 default: SSH username: vagrant 00:01:42.871 default: SSH auth method: private key 00:01:45.399 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:51.962 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:58.515 ==> default: Mounting SSHFS shared folder... 00:01:59.451 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:59.451 ==> default: Checking Mount.. 00:02:00.826 ==> default: Folder Successfully Mounted! 00:02:00.826 ==> default: Running provisioner: file... 00:02:01.761 default: ~/.gitconfig => .gitconfig 00:02:02.018 00:02:02.018 SUCCESS! 00:02:02.018 00:02:02.018 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:02.018 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:02.018 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:02.018 00:02:02.027 [Pipeline] } 00:02:02.044 [Pipeline] // stage 00:02:02.053 [Pipeline] dir 00:02:02.054 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:02:02.055 [Pipeline] { 00:02:02.066 [Pipeline] catchError 00:02:02.068 [Pipeline] { 00:02:02.082 [Pipeline] sh 00:02:02.359 + vagrant ssh-config --host vagrant 00:02:02.359 + sed -ne /^Host/,$p 00:02:02.359 + tee ssh_conf 00:02:05.641 Host vagrant 00:02:05.641 HostName 192.168.121.191 00:02:05.641 User vagrant 00:02:05.641 Port 22 00:02:05.641 UserKnownHostsFile /dev/null 00:02:05.641 StrictHostKeyChecking no 00:02:05.641 PasswordAuthentication no 00:02:05.641 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:05.641 IdentitiesOnly yes 00:02:05.641 LogLevel FATAL 00:02:05.641 ForwardAgent yes 00:02:05.641 ForwardX11 yes 00:02:05.641 00:02:05.654 [Pipeline] withEnv 00:02:05.657 [Pipeline] { 00:02:05.673 [Pipeline] sh 00:02:05.950 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:05.951 source /etc/os-release 00:02:05.951 [[ -e /image.version ]] && img=$(< /image.version) 00:02:05.951 # Minimal, systemd-like check. 
00:02:05.951 if [[ -e /.dockerenv ]]; then 00:02:05.951 # Clear garbage from the node's name: 00:02:05.951 # agt-er_autotest_547-896 -> autotest_547-896 00:02:05.951 # $HOSTNAME is the actual container id 00:02:05.951 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:05.951 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:05.951 # We can assume this is a mount from a host where container is running, 00:02:05.951 # so fetch its hostname to easily identify the target swarm worker. 00:02:05.951 container="$(< /etc/hostname) ($agent)" 00:02:05.951 else 00:02:05.951 # Fallback 00:02:05.951 container=$agent 00:02:05.951 fi 00:02:05.951 fi 00:02:05.951 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:05.951 00:02:06.220 [Pipeline] } 00:02:06.246 [Pipeline] // withEnv 00:02:06.256 [Pipeline] setCustomBuildProperty 00:02:06.275 [Pipeline] stage 00:02:06.277 [Pipeline] { (Tests) 00:02:06.301 [Pipeline] sh 00:02:06.583 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:06.855 [Pipeline] sh 00:02:07.133 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:07.407 [Pipeline] timeout 00:02:07.407 Timeout set to expire in 40 min 00:02:07.410 [Pipeline] { 00:02:07.427 [Pipeline] sh 00:02:07.706 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:08.273 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:02:08.286 [Pipeline] sh 00:02:08.564 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:08.837 [Pipeline] sh 00:02:09.114 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:09.388 [Pipeline] sh 00:02:09.712 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:09.971 ++ readlink -f spdk_repo 00:02:09.971 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:09.971 + [[ -n /home/vagrant/spdk_repo ]] 00:02:09.971 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:09.971 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:09.971 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:09.971 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:09.971 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:09.971 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:09.971 + cd /home/vagrant/spdk_repo 00:02:09.971 + source /etc/os-release 00:02:09.971 ++ NAME='Fedora Linux' 00:02:09.971 ++ VERSION='38 (Cloud Edition)' 00:02:09.971 ++ ID=fedora 00:02:09.971 ++ VERSION_ID=38 00:02:09.971 ++ VERSION_CODENAME= 00:02:09.971 ++ PLATFORM_ID=platform:f38 00:02:09.971 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:09.971 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:09.971 ++ LOGO=fedora-logo-icon 00:02:09.971 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:09.971 ++ HOME_URL=https://fedoraproject.org/ 00:02:09.971 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:09.971 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:09.971 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:09.971 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:09.971 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:09.971 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:09.971 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:09.971 ++ SUPPORT_END=2024-05-14 00:02:09.971 ++ VARIANT='Cloud Edition' 00:02:09.971 ++ VARIANT_ID=cloud 00:02:09.971 + uname -a 00:02:09.971 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:09.971 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:09.971 Hugepages 00:02:09.971 node hugesize free / total 00:02:09.971 node0 1048576kB 0 / 0 00:02:09.971 node0 2048kB 0 / 0 00:02:09.971 00:02:09.971 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.971 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:09.971 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:09.971 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:02:10.230 + rm -f /tmp/spdk-ld-path 00:02:10.230 + source autorun-spdk.conf 00:02:10.230 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.230 ++ SPDK_TEST_NVMF=1 00:02:10.230 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.230 ++ SPDK_TEST_USDT=1 00:02:10.230 ++ SPDK_RUN_UBSAN=1 00:02:10.230 ++ SPDK_TEST_NVMF_MDNS=1 00:02:10.230 ++ NET_TYPE=virt 00:02:10.230 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:10.230 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:10.230 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.230 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.230 ++ RUN_NIGHTLY=1 00:02:10.230 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.230 + [[ -n '' ]] 00:02:10.230 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:10.230 + for M in /var/spdk/build-*-manifest.txt 00:02:10.230 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.230 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.230 + for M in /var/spdk/build-*-manifest.txt 00:02:10.230 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.230 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.230 ++ uname 00:02:10.230 + [[ Linux == \L\i\n\u\x ]] 00:02:10.230 + sudo dmesg -T 00:02:10.230 + sudo dmesg --clear 00:02:10.230 + dmesg_pid=5878 00:02:10.230 + [[ Fedora Linux == FreeBSD ]] 00:02:10.230 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.230 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.230 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.230 + sudo dmesg -Tw 00:02:10.230 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.230 + 
export FIO_BIN=/usr/src/fio-static/fio 00:02:10.230 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.230 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.230 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.230 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.230 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.230 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.230 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.230 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.230 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.230 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.230 Test configuration: 00:02:10.230 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.230 SPDK_TEST_NVMF=1 00:02:10.230 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.230 SPDK_TEST_USDT=1 00:02:10.230 SPDK_RUN_UBSAN=1 00:02:10.230 SPDK_TEST_NVMF_MDNS=1 00:02:10.230 NET_TYPE=virt 00:02:10.230 SPDK_JSONRPC_GO_CLIENT=1 00:02:10.230 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:10.230 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.230 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.230 RUN_NIGHTLY=1 02:04:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:10.230 02:04:09 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.230 02:04:09 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.230 02:04:09 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.230 02:04:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.230 02:04:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.230 02:04:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.230 02:04:09 -- paths/export.sh@5 -- $ export PATH 00:02:10.230 02:04:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.230 02:04:09 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:10.230 02:04:09 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:10.230 02:04:09 -- common/autobuild_common.sh@435 
-- $ mktemp -dt spdk_1721009049.XXXXXX 00:02:10.230 02:04:09 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1721009049.boq7H4 00:02:10.230 02:04:09 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:10.230 02:04:09 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:02:10.230 02:04:09 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:10.230 02:04:09 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:10.230 02:04:09 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:10.231 02:04:09 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.231 02:04:09 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:10.231 02:04:09 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:10.231 02:04:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.231 02:04:09 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:10.231 02:04:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.231 02:04:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.231 02:04:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:10.231 02:04:09 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.231 Mon Jul 15 02:04:09 AM UTC 2024 00:02:10.231 02:04:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.231 LTS-59-g4b94202c6 00:02:10.231 02:04:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:10.231 02:04:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.231 02:04:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.231 02:04:09 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:10.231 02:04:09 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:10.231 02:04:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.489 ************************************ 00:02:10.489 START TEST ubsan 00:02:10.489 ************************************ 00:02:10.489 using ubsan 00:02:10.489 02:04:09 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:10.489 00:02:10.489 real 0m0.000s 00:02:10.489 user 0m0.000s 00:02:10.489 sys 0m0.000s 00:02:10.489 02:04:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:10.489 02:04:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.489 ************************************ 00:02:10.489 END TEST ubsan 00:02:10.489 ************************************ 00:02:10.489 02:04:09 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:10.489 02:04:09 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:10.489 02:04:09 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:10.489 02:04:09 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:10.489 02:04:09 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:10.489 02:04:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.489 ************************************ 00:02:10.489 START TEST build_native_dpdk 00:02:10.489 ************************************ 00:02:10.489 
02:04:09 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:10.489 02:04:09 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:10.489 02:04:09 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:10.489 02:04:09 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:10.489 02:04:09 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:10.489 02:04:09 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:10.489 02:04:09 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:10.489 02:04:09 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:10.489 02:04:09 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:10.489 02:04:09 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:10.489 02:04:09 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:10.489 02:04:09 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:10.489 02:04:09 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:10.489 02:04:09 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:10.489 02:04:09 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:10.489 02:04:09 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:10.489 02:04:09 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:10.489 02:04:09 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:10.489 caf0f5d395 version: 22.11.4 00:02:10.489 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:10.489 dc9c799c7d vhost: fix missing spinlock unlock 00:02:10.489 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:10.489 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:10.489 02:04:09 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:10.489 02:04:09 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:10.489 02:04:09 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:10.489 02:04:09 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:10.489 02:04:09 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:10.489 02:04:09 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:10.489 02:04:09 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:10.489 02:04:09 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:10.489 02:04:09 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:10.489 02:04:09 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:10.489 02:04:09 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 
00:02:10.489 02:04:09 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:10.489 02:04:09 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:10.489 02:04:09 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:10.489 02:04:09 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:10.489 02:04:09 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:10.489 02:04:09 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:10.489 02:04:09 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:10.489 02:04:09 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:10.489 02:04:09 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:10.489 02:04:09 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:10.489 02:04:09 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:10.489 02:04:09 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:10.489 02:04:09 -- scripts/common.sh@343 -- $ case "$op" in 00:02:10.489 02:04:09 -- scripts/common.sh@344 -- $ : 1 00:02:10.489 02:04:09 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:10.489 02:04:09 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:10.489 02:04:09 -- scripts/common.sh@364 -- $ decimal 22 00:02:10.489 02:04:09 -- scripts/common.sh@352 -- $ local d=22 00:02:10.489 02:04:09 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:10.489 02:04:09 -- scripts/common.sh@354 -- $ echo 22 00:02:10.489 02:04:09 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:10.489 02:04:09 -- scripts/common.sh@365 -- $ decimal 21 00:02:10.489 02:04:09 -- scripts/common.sh@352 -- $ local d=21 00:02:10.490 02:04:09 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:10.490 02:04:09 -- scripts/common.sh@354 -- $ echo 21 00:02:10.490 02:04:09 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:10.490 02:04:09 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:10.490 02:04:09 -- scripts/common.sh@366 -- $ return 1 00:02:10.490 02:04:09 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:10.490 patching file config/rte_config.h 00:02:10.490 Hunk #1 succeeded at 60 (offset 1 line). 
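The xtrace above shows the version-compare helpers from scripts/common.sh at work: `lt 22.11.4 21.11.0` delegates to `cmp_versions 22.11.4 '<' 21.11.0`, which splits each version string on '.', '-' and ':' (IFS=.-:), walks the components numerically, and returns 1 because 22 > 21, so DPDK 22.11.4 is not treated as older than 21.11.0 and the build proceeds to patch rte_config.h. A minimal standalone sketch of that comparison logic follows; it is a paraphrase under assumptions, not the verbatim scripts/common.sh source, and version_lt is a hypothetical name (the real helper also supports other operators such as '>' and '>='):

    #!/usr/bin/env bash
    # Sketch modeled on the cmp_versions/lt trace above (hypothetical
    # helper name, not the verbatim scripts/common.sh implementation).
    version_lt() {   # version_lt A B  ->  returns 0 iff A < B
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-', ':' as in the trace
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # a missing component compares as 0 (e.g. 22.11 vs 22.11.4)
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 22.11.4 21.11.0 && echo older || echo not-older   # prints: not-older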
00:02:10.490 02:04:09 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:10.490 02:04:09 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:10.490 02:04:09 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:10.490 02:04:09 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:10.490 02:04:09 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:15.752 The Meson build system 00:02:15.752 Version: 1.3.1 00:02:15.752 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:15.752 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:15.752 Build type: native build 00:02:15.752 Program cat found: YES (/usr/bin/cat) 00:02:15.752 Project name: DPDK 00:02:15.752 Project version: 22.11.4 00:02:15.752 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:15.752 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:15.752 Host machine cpu family: x86_64 00:02:15.752 Host machine cpu: x86_64 00:02:15.752 Message: ## Building in Developer Mode ## 00:02:15.752 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.752 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:15.752 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.752 Program objdump found: YES (/usr/bin/objdump) 00:02:15.752 Program python3 found: YES (/usr/bin/python3) 00:02:15.752 Program cat found: YES (/usr/bin/cat) 00:02:15.752 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:15.752 Checking for size of "void *" : 8 00:02:15.752 Checking for size of "void *" : 8 (cached) 00:02:15.752 Library m found: YES 00:02:15.752 Library numa found: YES 00:02:15.752 Has header "numaif.h" : YES 00:02:15.752 Library fdt found: NO 00:02:15.752 Library execinfo found: NO 00:02:15.752 Has header "execinfo.h" : YES 00:02:15.752 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:15.752 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.752 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.752 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.752 Run-time dependency openssl found: YES 3.0.9 00:02:15.752 Run-time dependency libpcap found: YES 1.10.4 00:02:15.752 Has header "pcap.h" with dependency libpcap: YES 00:02:15.752 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.752 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.752 Compiler for C supports arguments -Wformat: YES 00:02:15.752 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.752 Compiler for C supports arguments -Wformat-security: NO 00:02:15.752 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.752 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.752 Compiler for C supports arguments -Wnested-externs: YES 00:02:15.752 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.752 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.752 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.752 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.752 Compiler for C supports arguments -Wundef: YES 00:02:15.752 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.752 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:15.752 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.752 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.752 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.752 Compiler for C supports arguments -mavx512f: YES 00:02:15.752 Checking if "AVX512 checking" compiles: YES 00:02:15.752 Fetching value of define "__SSE4_2__" : 1 00:02:15.752 Fetching value of define "__AES__" : 1 00:02:15.752 Fetching value of define "__AVX__" : 1 00:02:15.752 Fetching value of define "__AVX2__" : 1 00:02:15.752 Fetching value of define "__AVX512BW__" : (undefined) 00:02:15.752 Fetching value of define "__AVX512CD__" : (undefined) 00:02:15.752 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:15.752 Fetching value of define "__AVX512F__" : (undefined) 00:02:15.752 Fetching value of define "__AVX512VL__" : (undefined) 00:02:15.752 Fetching value of define "__PCLMUL__" : 1 00:02:15.752 Fetching value of define "__RDRND__" : 1 00:02:15.752 Fetching value of define "__RDSEED__" : 1 00:02:15.752 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:15.752 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.752 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.752 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.752 Checking for function "getentropy" : YES 00:02:15.752 Message: lib/eal: Defining dependency "eal" 00:02:15.752 Message: lib/ring: Defining dependency "ring" 00:02:15.752 Message: lib/rcu: Defining dependency "rcu" 00:02:15.752 Message: lib/mempool: Defining dependency "mempool" 00:02:15.752 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.752 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:15.752 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.752 Compiler for C supports arguments -mpclmul: YES 00:02:15.752 Compiler for C supports arguments -maes: YES 00:02:15.752 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.752 Compiler for C supports arguments -mavx512bw: YES 00:02:15.752 Compiler for C supports arguments -mavx512dq: YES 00:02:15.752 Compiler for C supports arguments -mavx512vl: YES 00:02:15.752 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:15.752 Compiler for C supports arguments -mavx2: YES 00:02:15.752 Compiler for C supports arguments -mavx: YES 00:02:15.752 Message: lib/net: Defining dependency "net" 00:02:15.752 Message: lib/meter: Defining dependency "meter" 00:02:15.752 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.752 Message: lib/pci: Defining dependency "pci" 00:02:15.752 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.752 Message: lib/metrics: Defining dependency "metrics" 00:02:15.752 Message: lib/hash: Defining dependency "hash" 00:02:15.752 Message: lib/timer: Defining dependency "timer" 00:02:15.752 Fetching value of define "__AVX2__" : 1 (cached) 00:02:15.752 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.752 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:15.752 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:15.752 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:15.752 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:15.752 Message: lib/acl: Defining dependency "acl" 00:02:15.752 Message: lib/bbdev: Defining dependency "bbdev" 00:02:15.752 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:15.752 Run-time dependency libelf found: YES 0.190 00:02:15.752 Message: lib/bpf: Defining dependency "bpf" 00:02:15.752 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:15.752 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.752 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.752 Message: lib/distributor: Defining dependency "distributor" 00:02:15.752 Message: lib/efd: Defining dependency "efd" 00:02:15.752 Message: lib/eventdev: Defining dependency "eventdev" 00:02:15.752 Message: lib/gpudev: Defining dependency "gpudev" 00:02:15.752 Message: lib/gro: Defining dependency "gro" 00:02:15.752 Message: lib/gso: Defining dependency "gso" 00:02:15.752 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:15.752 Message: lib/jobstats: Defining dependency "jobstats" 00:02:15.752 Message: lib/latencystats: Defining dependency "latencystats" 00:02:15.752 Message: lib/lpm: Defining dependency "lpm" 00:02:15.752 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.753 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:15.753 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:15.753 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:15.753 Message: lib/member: Defining dependency "member" 00:02:15.753 Message: lib/pcapng: Defining dependency "pcapng" 00:02:15.753 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.753 Message: lib/power: Defining dependency "power" 00:02:15.753 Message: lib/rawdev: Defining dependency "rawdev" 00:02:15.753 Message: lib/regexdev: Defining dependency "regexdev" 00:02:15.753 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.753 Message: lib/rib: Defining 
dependency "rib" 00:02:15.753 Message: lib/reorder: Defining dependency "reorder" 00:02:15.753 Message: lib/sched: Defining dependency "sched" 00:02:15.753 Message: lib/security: Defining dependency "security" 00:02:15.753 Message: lib/stack: Defining dependency "stack" 00:02:15.753 Has header "linux/userfaultfd.h" : YES 00:02:15.753 Message: lib/vhost: Defining dependency "vhost" 00:02:15.753 Message: lib/ipsec: Defining dependency "ipsec" 00:02:15.753 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.753 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:15.753 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:15.753 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:15.753 Message: lib/fib: Defining dependency "fib" 00:02:15.753 Message: lib/port: Defining dependency "port" 00:02:15.753 Message: lib/pdump: Defining dependency "pdump" 00:02:15.753 Message: lib/table: Defining dependency "table" 00:02:15.753 Message: lib/pipeline: Defining dependency "pipeline" 00:02:15.753 Message: lib/graph: Defining dependency "graph" 00:02:15.753 Message: lib/node: Defining dependency "node" 00:02:15.753 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.753 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.753 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.753 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.753 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:15.753 Compiler for C supports arguments -Wno-unused-value: YES 00:02:15.753 Compiler for C supports arguments -Wno-format: YES 00:02:15.753 Compiler for C supports arguments -Wno-format-security: YES 00:02:15.753 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:16.686 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:16.686 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:16.686 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:16.686 Fetching value of define "__AVX2__" : 1 (cached) 00:02:16.686 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:16.686 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:16.686 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:16.686 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:16.686 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:16.686 Program doxygen found: YES (/usr/bin/doxygen) 00:02:16.686 Configuring doxy-api.conf using configuration 00:02:16.686 Program sphinx-build found: NO 00:02:16.686 Configuring rte_build_config.h using configuration 00:02:16.686 Message: 00:02:16.686 ================= 00:02:16.686 Applications Enabled 00:02:16.686 ================= 00:02:16.686 00:02:16.686 apps: 00:02:16.686 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:16.686 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:16.686 test-security-perf, 00:02:16.686 00:02:16.686 Message: 00:02:16.686 ================= 00:02:16.686 Libraries Enabled 00:02:16.686 ================= 00:02:16.686 00:02:16.686 libs: 00:02:16.686 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:16.686 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:16.686 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:16.686 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:16.686 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:16.686 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:16.686 table, pipeline, graph, node, 00:02:16.686 00:02:16.686 Message: 00:02:16.686 =============== 00:02:16.686 Drivers Enabled 00:02:16.687 =============== 00:02:16.687 00:02:16.687 common: 00:02:16.687 00:02:16.687 bus: 00:02:16.687 pci, vdev, 00:02:16.687 mempool: 00:02:16.687 ring, 00:02:16.687 dma: 00:02:16.687 00:02:16.687 net: 00:02:16.687 i40e, 00:02:16.687 raw: 00:02:16.687 00:02:16.687 crypto: 00:02:16.687 00:02:16.687 compress: 00:02:16.687 00:02:16.687 regex: 00:02:16.687 00:02:16.687 vdpa: 00:02:16.687 00:02:16.687 event: 00:02:16.687 00:02:16.687 baseband: 00:02:16.687 00:02:16.687 gpu: 00:02:16.687 00:02:16.687 00:02:16.687 Message: 00:02:16.687 ================= 00:02:16.687 Content Skipped 00:02:16.687 ================= 00:02:16.687 00:02:16.687 apps: 00:02:16.687 00:02:16.687 libs: 00:02:16.687 kni: explicitly disabled via build config (deprecated lib) 00:02:16.687 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:16.687 00:02:16.687 drivers: 00:02:16.687 common/cpt: not in enabled drivers build config 00:02:16.687 common/dpaax: not in enabled drivers build config 00:02:16.687 common/iavf: not in enabled drivers build config 00:02:16.687 common/idpf: not in enabled drivers build config 00:02:16.687 common/mvep: not in enabled drivers build config 00:02:16.687 common/octeontx: not in enabled drivers build config 00:02:16.687 bus/auxiliary: not in enabled drivers build config 00:02:16.687 bus/dpaa: not in enabled drivers build config 00:02:16.687 bus/fslmc: not in enabled drivers build config 00:02:16.687 bus/ifpga: not in enabled drivers build config 00:02:16.687 bus/vmbus: not in enabled drivers build config 00:02:16.687 common/cnxk: not in enabled drivers build config 00:02:16.687 common/mlx5: not in enabled drivers build config 00:02:16.687 common/qat: not in enabled drivers build config 00:02:16.687 common/sfc_efx: not in enabled drivers build config 00:02:16.687 mempool/bucket: not in enabled drivers build config 00:02:16.687 mempool/cnxk: not in enabled drivers build config 00:02:16.687 mempool/dpaa: not in enabled drivers build config 00:02:16.687 mempool/dpaa2: not in enabled drivers build config 00:02:16.687 mempool/octeontx: not in enabled drivers build config 00:02:16.687 mempool/stack: not in enabled drivers build config 00:02:16.687 dma/cnxk: not in enabled drivers build config 00:02:16.687 dma/dpaa: not in enabled drivers build config 00:02:16.687 dma/dpaa2: not in enabled drivers build config 00:02:16.687 dma/hisilicon: not in enabled drivers build config 00:02:16.687 dma/idxd: not in enabled drivers build config 00:02:16.687 dma/ioat: not in enabled drivers build config 00:02:16.687 dma/skeleton: not in enabled drivers build config 00:02:16.687 net/af_packet: not in enabled drivers build config 00:02:16.687 net/af_xdp: not in enabled drivers build config 00:02:16.687 net/ark: not in enabled drivers build config 00:02:16.687 net/atlantic: not in enabled drivers build config 00:02:16.687 net/avp: not in enabled drivers build config 00:02:16.687 net/axgbe: not in enabled drivers build config 00:02:16.687 net/bnx2x: not in enabled drivers build config 00:02:16.687 net/bnxt: not in enabled drivers build config 00:02:16.687 net/bonding: not in enabled drivers build config 00:02:16.687 net/cnxk: not in enabled drivers build config 00:02:16.687 net/cxgbe: not in 
enabled drivers build config 00:02:16.687 net/dpaa: not in enabled drivers build config 00:02:16.687 net/dpaa2: not in enabled drivers build config 00:02:16.687 net/e1000: not in enabled drivers build config 00:02:16.687 net/ena: not in enabled drivers build config 00:02:16.687 net/enetc: not in enabled drivers build config 00:02:16.687 net/enetfec: not in enabled drivers build config 00:02:16.687 net/enic: not in enabled drivers build config 00:02:16.687 net/failsafe: not in enabled drivers build config 00:02:16.687 net/fm10k: not in enabled drivers build config 00:02:16.687 net/gve: not in enabled drivers build config 00:02:16.687 net/hinic: not in enabled drivers build config 00:02:16.687 net/hns3: not in enabled drivers build config 00:02:16.687 net/iavf: not in enabled drivers build config 00:02:16.687 net/ice: not in enabled drivers build config 00:02:16.687 net/idpf: not in enabled drivers build config 00:02:16.687 net/igc: not in enabled drivers build config 00:02:16.687 net/ionic: not in enabled drivers build config 00:02:16.687 net/ipn3ke: not in enabled drivers build config 00:02:16.687 net/ixgbe: not in enabled drivers build config 00:02:16.687 net/kni: not in enabled drivers build config 00:02:16.687 net/liquidio: not in enabled drivers build config 00:02:16.687 net/mana: not in enabled drivers build config 00:02:16.687 net/memif: not in enabled drivers build config 00:02:16.687 net/mlx4: not in enabled drivers build config 00:02:16.687 net/mlx5: not in enabled drivers build config 00:02:16.687 net/mvneta: not in enabled drivers build config 00:02:16.687 net/mvpp2: not in enabled drivers build config 00:02:16.687 net/netvsc: not in enabled drivers build config 00:02:16.687 net/nfb: not in enabled drivers build config 00:02:16.687 net/nfp: not in enabled drivers build config 00:02:16.687 net/ngbe: not in enabled drivers build config 00:02:16.687 net/null: not in enabled drivers build config 00:02:16.687 net/octeontx: not in enabled drivers build config 00:02:16.687 net/octeon_ep: not in enabled drivers build config 00:02:16.687 net/pcap: not in enabled drivers build config 00:02:16.687 net/pfe: not in enabled drivers build config 00:02:16.687 net/qede: not in enabled drivers build config 00:02:16.687 net/ring: not in enabled drivers build config 00:02:16.687 net/sfc: not in enabled drivers build config 00:02:16.687 net/softnic: not in enabled drivers build config 00:02:16.687 net/tap: not in enabled drivers build config 00:02:16.687 net/thunderx: not in enabled drivers build config 00:02:16.687 net/txgbe: not in enabled drivers build config 00:02:16.687 net/vdev_netvsc: not in enabled drivers build config 00:02:16.687 net/vhost: not in enabled drivers build config 00:02:16.687 net/virtio: not in enabled drivers build config 00:02:16.687 net/vmxnet3: not in enabled drivers build config 00:02:16.687 raw/cnxk_bphy: not in enabled drivers build config 00:02:16.687 raw/cnxk_gpio: not in enabled drivers build config 00:02:16.687 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:16.687 raw/ifpga: not in enabled drivers build config 00:02:16.687 raw/ntb: not in enabled drivers build config 00:02:16.687 raw/skeleton: not in enabled drivers build config 00:02:16.687 crypto/armv8: not in enabled drivers build config 00:02:16.687 crypto/bcmfs: not in enabled drivers build config 00:02:16.687 crypto/caam_jr: not in enabled drivers build config 00:02:16.687 crypto/ccp: not in enabled drivers build config 00:02:16.687 crypto/cnxk: not in enabled drivers build config 00:02:16.687 
crypto/dpaa_sec: not in enabled drivers build config 00:02:16.687 crypto/dpaa2_sec: not in enabled drivers build config 00:02:16.687 crypto/ipsec_mb: not in enabled drivers build config 00:02:16.687 crypto/mlx5: not in enabled drivers build config 00:02:16.687 crypto/mvsam: not in enabled drivers build config 00:02:16.687 crypto/nitrox: not in enabled drivers build config 00:02:16.687 crypto/null: not in enabled drivers build config 00:02:16.687 crypto/octeontx: not in enabled drivers build config 00:02:16.687 crypto/openssl: not in enabled drivers build config 00:02:16.687 crypto/scheduler: not in enabled drivers build config 00:02:16.687 crypto/uadk: not in enabled drivers build config 00:02:16.687 crypto/virtio: not in enabled drivers build config 00:02:16.687 compress/isal: not in enabled drivers build config 00:02:16.687 compress/mlx5: not in enabled drivers build config 00:02:16.687 compress/octeontx: not in enabled drivers build config 00:02:16.687 compress/zlib: not in enabled drivers build config 00:02:16.687 regex/mlx5: not in enabled drivers build config 00:02:16.687 regex/cn9k: not in enabled drivers build config 00:02:16.687 vdpa/ifc: not in enabled drivers build config 00:02:16.687 vdpa/mlx5: not in enabled drivers build config 00:02:16.687 vdpa/sfc: not in enabled drivers build config 00:02:16.687 event/cnxk: not in enabled drivers build config 00:02:16.687 event/dlb2: not in enabled drivers build config 00:02:16.687 event/dpaa: not in enabled drivers build config 00:02:16.687 event/dpaa2: not in enabled drivers build config 00:02:16.687 event/dsw: not in enabled drivers build config 00:02:16.687 event/opdl: not in enabled drivers build config 00:02:16.687 event/skeleton: not in enabled drivers build config 00:02:16.688 event/sw: not in enabled drivers build config 00:02:16.688 event/octeontx: not in enabled drivers build config 00:02:16.688 baseband/acc: not in enabled drivers build config 00:02:16.688 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:16.688 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:16.688 baseband/la12xx: not in enabled drivers build config 00:02:16.688 baseband/null: not in enabled drivers build config 00:02:16.688 baseband/turbo_sw: not in enabled drivers build config 00:02:16.688 gpu/cuda: not in enabled drivers build config 00:02:16.688 00:02:16.688 00:02:16.688 Build targets in project: 314 00:02:16.688 00:02:16.688 DPDK 22.11.4 00:02:16.688 00:02:16.688 User defined options 00:02:16.688 libdir : lib 00:02:16.688 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:16.688 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:16.688 c_link_args : 00:02:16.688 enable_docs : false 00:02:16.688 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:16.688 enable_kmods : false 00:02:16.688 machine : native 00:02:16.688 tests : false 00:02:16.688 00:02:16.688 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:16.688 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
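Note on the WARNING above: Meson has deprecated configuring a build directory with the bare "meson [options]" form in favour of the explicit "meson setup" subcommand. A minimal sketch of the equivalent non-deprecated invocation, reconstructed from the "User defined options" summary above (the build directory name build-tmp is taken from the ninja step that follows, and the -D option names are DPDK 22.11's standard meson options, so treat this as illustrative rather than the literal command the job ran):

  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base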
00:02:16.945 02:04:16 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:16.945 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:16.945 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:16.945 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:16.945 [3/743] Generating lib/rte_kvargs_def with a custom command 00:02:16.945 [4/743] Generating lib/rte_telemetry_def with a custom command 00:02:16.945 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.945 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.203 [7/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.203 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:17.203 [9/743] Linking static target lib/librte_kvargs.a 00:02:17.203 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:17.203 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.203 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:17.203 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.203 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.203 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.203 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:17.203 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:17.203 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:17.203 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:17.203 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.461 [21/743] Linking target lib/librte_kvargs.so.23.0 00:02:17.461 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:17.461 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:17.461 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.461 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:17.461 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:17.461 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:17.461 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:17.461 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:17.719 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.719 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:17.719 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:17.719 [33/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:17.719 [34/743] Linking static target lib/librte_telemetry.a 00:02:17.719 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:17.719 [36/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:17.719 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.719 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.719 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:17.719 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:17.719 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.978 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:17.978 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:17.978 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:17.978 [45/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.978 [46/743] Linking target lib/librte_telemetry.so.23.0 00:02:17.978 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.236 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.236 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.236 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.236 [51/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:18.236 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.236 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.236 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.236 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.236 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.236 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.236 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.236 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:18.236 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:18.236 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.236 [62/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.236 [63/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:18.494 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:18.494 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:18.494 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:18.494 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:18.494 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:18.494 [69/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.494 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:18.494 [71/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:18.494 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.494 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:18.494 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:18.494 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:18.494 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:18.494 [77/743] Generating lib/rte_eal_def with a custom command 00:02:18.494 [78/743] Generating lib/rte_eal_mingw with a custom command 
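The interleaved "Generating lib/rte_*_def / lib/rte_*_mingw with a custom command" steps and the "Generating lib/*.sym_chk with a custom command (wrapped by meson to capture output)" steps are DPDK's per-library export handling: each library's version map is converted into the export-list formats the different toolchains expect, and the sym_chk target then checks that the symbols the built library actually exports match that map (meson wraps the check so its output is captured in this log). A quick manual spot-check with standard binutils against the kvargs library linked above; the path under build-tmp is an assumption based on the "Linking target lib/librte_kvargs.so.23.0" step:

  nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_kvargs.so.23.0 | grep ' rte_kvargs_'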
00:02:18.494 [79/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.752 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.752 [81/743] Generating lib/rte_ring_mingw with a custom command 00:02:18.752 [82/743] Generating lib/rte_ring_def with a custom command 00:02:18.752 [83/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:18.752 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.752 [85/743] Generating lib/rte_rcu_def with a custom command 00:02:18.752 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:18.752 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.752 [88/743] Linking static target lib/librte_ring.a 00:02:18.752 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:18.752 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:18.752 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:19.010 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.010 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.010 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.268 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:19.268 [96/743] Linking static target lib/librte_eal.a 00:02:19.268 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:19.268 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:19.268 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:19.268 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:19.268 [101/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:19.525 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:19.525 [103/743] Linking static target lib/librte_rcu.a 00:02:19.525 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:19.525 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:19.782 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:19.782 [107/743] Linking static target lib/librte_mempool.a 00:02:19.782 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.782 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:19.782 [110/743] Generating lib/rte_net_def with a custom command 00:02:19.782 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:19.782 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:19.782 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:20.039 [114/743] Generating lib/rte_meter_def with a custom command 00:02:20.039 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:20.039 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:20.039 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:20.039 [118/743] Linking static target lib/librte_meter.a 00:02:20.039 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:20.296 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:20.296 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.297 [122/743] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:02:20.555 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:20.555 [124/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:20.555 [125/743] Linking static target lib/librte_net.a 00:02:20.555 [126/743] Linking static target lib/librte_mbuf.a 00:02:20.555 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.813 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.813 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:20.813 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:20.813 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:20.813 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:21.071 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:21.071 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.329 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:21.588 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:21.588 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:21.588 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:21.588 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:21.588 [140/743] Generating lib/rte_pci_def with a custom command 00:02:21.588 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:21.588 [142/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:21.588 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.588 [144/743] Linking static target lib/librte_pci.a 00:02:21.588 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:21.588 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:21.588 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:21.846 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.846 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.846 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.846 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.846 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:21.846 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:22.104 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:22.104 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:22.104 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:22.104 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:22.104 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:22.104 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:22.104 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:22.104 [161/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.104 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:22.104 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:22.104 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:22.104 [165/743] Generating lib/rte_hash_def with a custom command 00:02:22.104 [166/743] Generating lib/rte_hash_mingw with a custom command 00:02:22.389 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.389 [168/743] Generating lib/rte_timer_def with a custom command 00:02:22.389 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:22.389 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.389 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.389 [172/743] Linking static target lib/librte_cmdline.a 00:02:22.656 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:22.656 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:22.656 [175/743] Linking static target lib/librte_metrics.a 00:02:22.656 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.656 [177/743] Linking static target lib/librte_timer.a 00:02:23.222 [178/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.222 [179/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.222 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:23.222 [181/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:23.222 [182/743] Linking static target lib/librte_ethdev.a 00:02:23.222 [183/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.480 [184/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:23.739 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:23.739 [186/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:23.739 [187/743] Generating lib/rte_acl_def with a custom command 00:02:23.739 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:23.739 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:23.997 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:23.997 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:23.997 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:23.997 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:24.254 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:24.511 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:24.511 [196/743] Linking static target lib/librte_bitratestats.a 00:02:24.768 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:24.768 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.768 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:24.768 [200/743] Linking static target lib/librte_bbdev.a 00:02:24.768 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:25.025 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.025 [203/743] Linking static target lib/librte_hash.a 00:02:25.283 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:25.283 [205/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.283 [206/743] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:25.283 [207/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:25.283 [208/743] Linking static target lib/acl/libavx512_tmp.a 00:02:25.540 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:25.798 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.798 [211/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:25.798 [212/743] Generating lib/rte_bpf_def with a custom command 00:02:25.798 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:02:25.798 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:25.798 [215/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:26.056 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:26.056 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:26.056 [218/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:26.056 [219/743] Linking static target lib/librte_cfgfile.a 00:02:26.313 [220/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:26.313 [221/743] Generating lib/rte_compressdev_def with a custom command 00:02:26.313 [222/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:26.313 [223/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:26.313 [224/743] Linking static target lib/librte_acl.a 00:02:26.571 [225/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.571 [226/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:26.571 [227/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:26.571 [228/743] Generating lib/rte_cryptodev_def with a custom command 00:02:26.571 [229/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:26.571 [230/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.828 [231/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.828 [232/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:26.828 [233/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:26.828 [234/743] Linking target lib/librte_eal.so.23.0 00:02:26.828 [235/743] Linking static target lib/librte_bpf.a 00:02:26.828 [236/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:26.828 [237/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:26.828 [238/743] Linking target lib/librte_ring.so.23.0 00:02:26.828 [239/743] Linking target lib/librte_meter.so.23.0 00:02:27.085 [240/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:27.085 [241/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:27.085 [242/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:27.085 [243/743] Linking target lib/librte_rcu.so.23.0 00:02:27.085 [244/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.085 [245/743] Linking target lib/librte_mempool.so.23.0 00:02:27.085 [246/743] Linking target lib/librte_pci.so.23.0 00:02:27.085 [247/743] Linking target lib/librte_timer.so.23.0 00:02:27.343 [248/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:27.343 [249/743] 
Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:27.343 [250/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:27.343 [251/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:27.343 [252/743] Linking target lib/librte_cfgfile.so.23.0 00:02:27.343 [253/743] Linking target lib/librte_mbuf.so.23.0 00:02:27.343 [254/743] Linking static target lib/librte_compressdev.a 00:02:27.343 [255/743] Linking target lib/librte_acl.so.23.0 00:02:27.343 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:27.343 [257/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:27.343 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:02:27.343 [259/743] Generating lib/rte_efd_def with a custom command 00:02:27.343 [260/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:27.343 [261/743] Generating lib/rte_efd_mingw with a custom command 00:02:27.343 [262/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:27.343 [263/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:27.343 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:27.343 [265/743] Linking target lib/librte_net.so.23.0 00:02:27.343 [266/743] Linking target lib/librte_bbdev.so.23.0 00:02:27.601 [267/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:27.601 [268/743] Linking target lib/librte_cmdline.so.23.0 00:02:27.601 [269/743] Linking target lib/librte_hash.so.23.0 00:02:27.601 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:27.601 [271/743] Linking static target lib/librte_distributor.a 00:02:27.858 [272/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:27.858 [273/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.858 [274/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.858 [275/743] Linking target lib/librte_distributor.so.23.0 00:02:28.114 [276/743] Linking target lib/librte_ethdev.so.23.0 00:02:28.114 [277/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:28.114 [278/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:28.114 [279/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.114 [280/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:28.114 [281/743] Linking target lib/librte_compressdev.so.23.0 00:02:28.114 [282/743] Linking target lib/librte_metrics.so.23.0 00:02:28.114 [283/743] Linking target lib/librte_bpf.so.23.0 00:02:28.372 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:28.372 [285/743] Linking target lib/librte_bitratestats.so.23.0 00:02:28.372 [286/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:28.372 [287/743] Generating lib/rte_eventdev_def with a custom command 00:02:28.372 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:28.372 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:28.372 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
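The "Generating symbol file ..." steps are Meson's relink optimisation rather than anything DPDK-specific: after a shared library is linked, Meson extracts its exported-symbol list into a .symbols file, and targets that link against the library are relinked only when that list changes, not on every rebuild of the library itself. The file is plain text and can be inspected directly (full path assumed by prepending the build directory to the relative path printed in the log):

  head /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols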
00:02:28.629 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:28.887 [292/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:28.887 [293/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:28.887 [294/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:28.887 [295/743] Linking static target lib/librte_efd.a 00:02:28.887 [296/743] Linking static target lib/librte_cryptodev.a 00:02:29.143 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.143 [298/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:29.143 [299/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:29.143 [300/743] Linking target lib/librte_efd.so.23.0 00:02:29.143 [301/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:29.143 [302/743] Linking static target lib/librte_gpudev.a 00:02:29.143 [303/743] Generating lib/rte_gro_def with a custom command 00:02:29.400 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:29.400 [305/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:29.400 [306/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:29.656 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:29.656 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:29.913 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:29.913 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:29.913 [311/743] Generating lib/rte_gso_def with a custom command 00:02:29.913 [312/743] Generating lib/rte_gso_mingw with a custom command 00:02:30.171 [313/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.171 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:30.171 [315/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:30.171 [316/743] Linking static target lib/librte_gro.a 00:02:30.171 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:30.171 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:30.171 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:30.171 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.171 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:30.429 [322/743] Linking target lib/librte_gro.so.23.0 00:02:30.429 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:30.429 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:30.429 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:30.429 [326/743] Linking static target lib/librte_eventdev.a 00:02:30.429 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:30.429 [328/743] Linking static target lib/librte_jobstats.a 00:02:30.429 [329/743] Generating lib/rte_jobstats_def with a custom command 00:02:30.686 [330/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:30.686 [331/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:30.686 [332/743] Linking static target lib/librte_gso.a 00:02:30.686 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.686 [334/743] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:30.686 [335/743] Linking target lib/librte_gso.so.23.0 00:02:30.943 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:30.943 [337/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:30.943 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:30.943 [339/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.943 [340/743] Linking target lib/librte_jobstats.so.23.0 00:02:30.943 [341/743] Generating lib/rte_lpm_def with a custom command 00:02:30.943 [342/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:30.943 [343/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:30.943 [344/743] Generating lib/rte_lpm_mingw with a custom command 00:02:30.943 [345/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.943 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:30.943 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:02:31.200 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:31.200 [349/743] Linking static target lib/librte_ip_frag.a 00:02:31.200 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:31.458 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.458 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:31.458 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:31.458 [354/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:31.458 [355/743] Linking static target lib/librte_latencystats.a 00:02:31.714 [356/743] Generating lib/rte_member_def with a custom command 00:02:31.714 [357/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:31.714 [358/743] Generating lib/rte_member_mingw with a custom command 00:02:31.714 [359/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:31.714 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:31.714 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:31.714 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:31.714 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:31.714 [364/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:31.714 [365/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.714 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:31.714 [367/743] Linking target lib/librte_latencystats.so.23.0 00:02:31.971 [368/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:31.971 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:31.971 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:32.229 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:32.229 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:32.229 [373/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.229 [374/743] Generating lib/rte_power_def with a custom command 
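Each enabled library appears twice in the link steps: "Linking static target lib/librte_*.a" for the static archive and "Linking target lib/librte_*.so.23.0" for the shared object, 23.0 being the DPDK ABI version that ships with release 22.11. One way to confirm the corresponding SONAME on a library linked above (same assumed build-tmp path as in the earlier notes):

  objdump -p /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_latencystats.so.23.0 | grep SONAME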
00:02:32.229 [375/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:32.229 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:32.229 [377/743] Linking static target lib/librte_lpm.a 00:02:32.487 [378/743] Linking target lib/librte_eventdev.so.23.0 00:02:32.487 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:32.487 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:32.487 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:32.487 [382/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:32.487 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:32.487 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:32.487 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:32.487 [386/743] Generating lib/rte_dmadev_def with a custom command 00:02:32.745 [387/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:32.745 [388/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:32.745 [389/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.745 [390/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:32.745 [391/743] Linking static target lib/librte_pcapng.a 00:02:32.745 [392/743] Linking target lib/librte_lpm.so.23.0 00:02:32.745 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:32.745 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:32.745 [395/743] Linking static target lib/librte_rawdev.a 00:02:32.745 [396/743] Generating lib/rte_rib_def with a custom command 00:02:32.745 [397/743] Generating lib/rte_rib_mingw with a custom command 00:02:32.745 [398/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:32.745 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:33.006 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:33.006 [401/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:33.006 [402/743] Linking static target lib/librte_power.a 00:02:33.006 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:33.006 [404/743] Linking static target lib/librte_dmadev.a 00:02:33.006 [405/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.006 [406/743] Linking target lib/librte_pcapng.so.23.0 00:02:33.265 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:33.265 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.265 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:33.265 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:33.265 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:33.265 [412/743] Linking static target lib/librte_regexdev.a 00:02:33.265 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:33.265 [414/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:33.265 [415/743] Linking static target lib/librte_member.a 00:02:33.265 [416/743] Generating lib/rte_sched_def with a custom command 00:02:33.265 [417/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:33.265 [418/743] Generating lib/rte_sched_mingw with a 
custom command 00:02:33.524 [419/743] Generating lib/rte_security_def with a custom command 00:02:33.524 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:33.524 [421/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.524 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:33.524 [423/743] Linking target lib/librte_dmadev.so.23.0 00:02:33.524 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:33.524 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:33.524 [426/743] Generating lib/rte_stack_def with a custom command 00:02:33.524 [427/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:33.524 [428/743] Linking static target lib/librte_reorder.a 00:02:33.524 [429/743] Generating lib/rte_stack_mingw with a custom command 00:02:33.524 [430/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:33.784 [431/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:33.784 [432/743] Linking static target lib/librte_stack.a 00:02:33.784 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.784 [434/743] Linking target lib/librte_member.so.23.0 00:02:33.784 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:33.784 [436/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:33.784 [437/743] Linking static target lib/librte_rib.a 00:02:33.784 [438/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.784 [439/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.784 [440/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.042 [441/743] Linking target lib/librte_reorder.so.23.0 00:02:34.042 [442/743] Linking target lib/librte_stack.so.23.0 00:02:34.042 [443/743] Linking target lib/librte_power.so.23.0 00:02:34.042 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.042 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:34.299 [446/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.299 [447/743] Linking target lib/librte_rib.so.23.0 00:02:34.299 [448/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:34.299 [449/743] Linking static target lib/librte_security.a 00:02:34.299 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:34.556 [451/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:34.556 [452/743] Generating lib/rte_vhost_def with a custom command 00:02:34.556 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:34.556 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:34.814 [455/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:34.814 [456/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.814 [457/743] Linking target lib/librte_security.so.23.0 00:02:34.814 [458/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:34.814 [459/743] Linking static target lib/librte_sched.a 00:02:34.814 [460/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:35.380 [461/743] Generating 
lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.380 [462/743] Linking target lib/librte_sched.so.23.0 00:02:35.380 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:35.380 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:35.380 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:35.380 [466/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:35.380 [467/743] Generating lib/rte_ipsec_def with a custom command 00:02:35.380 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:35.380 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:35.638 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:35.638 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:35.897 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:35.897 [473/743] Generating lib/rte_fib_def with a custom command 00:02:36.156 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:36.156 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:36.156 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:36.156 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:36.156 [478/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:36.156 [479/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:36.156 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:36.414 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:36.414 [482/743] Linking static target lib/librte_ipsec.a 00:02:36.673 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.673 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:36.931 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:36.931 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:36.931 [487/743] Linking static target lib/librte_fib.a 00:02:36.931 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:36.931 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:36.931 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:36.931 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:37.190 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.190 [493/743] Linking target lib/librte_fib.so.23.0 00:02:37.448 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:38.015 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:38.015 [496/743] Generating lib/rte_port_def with a custom command 00:02:38.015 [497/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:38.015 [498/743] Generating lib/rte_port_mingw with a custom command 00:02:38.015 [499/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:38.015 [500/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:38.015 [501/743] Generating lib/rte_pdump_def with a custom command 00:02:38.015 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:02:38.015 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:38.274 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:38.274 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:38.274 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:38.533 [507/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:38.533 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:38.533 [509/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:38.533 [510/743] Linking static target lib/librte_port.a 00:02:38.800 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:38.800 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:39.058 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:39.058 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.058 [515/743] Linking target lib/librte_port.so.23.0 00:02:39.058 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:39.058 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:39.058 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:39.316 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:39.316 [520/743] Linking static target lib/librte_pdump.a 00:02:39.573 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.573 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:39.831 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:39.831 [524/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:39.831 [525/743] Generating lib/rte_table_def with a custom command 00:02:39.831 [526/743] Generating lib/rte_table_mingw with a custom command 00:02:39.831 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:39.831 [528/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:40.088 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:40.088 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:40.088 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:40.346 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:40.346 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:40.346 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:40.346 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:40.346 [536/743] Linking static target lib/librte_table.a 00:02:40.603 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:40.861 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:40.861 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.861 [540/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:41.119 [541/743] Linking target lib/librte_table.so.23.0 00:02:41.119 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:41.119 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:41.119 [544/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:41.119 [545/743] Generating 
lib/rte_graph_mingw with a custom command 00:02:41.119 [546/743] Generating lib/rte_graph_def with a custom command 00:02:41.377 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:41.377 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:41.635 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:41.635 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:41.635 [551/743] Linking static target lib/librte_graph.a 00:02:41.892 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:42.150 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:42.150 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:42.150 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:42.408 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:42.408 [557/743] Generating lib/rte_node_def with a custom command 00:02:42.408 [558/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:42.408 [559/743] Generating lib/rte_node_mingw with a custom command 00:02:42.408 [560/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.408 [561/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:42.408 [562/743] Linking target lib/librte_graph.so.23.0 00:02:42.665 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:42.665 [564/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:42.665 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:42.665 [566/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:42.665 [567/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:42.665 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:42.665 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:42.922 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:42.922 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:42.922 [572/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:42.922 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:42.922 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:42.922 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:42.922 [576/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:42.922 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:42.922 [578/743] Linking static target lib/librte_node.a 00:02:43.180 [579/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:43.180 [580/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:43.180 [581/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:43.180 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.180 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:43.180 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.180 [585/743] Linking static target drivers/librte_bus_vdev.a 00:02:43.180 [586/743] Linking target lib/librte_node.so.23.0 00:02:43.438 [587/743] 
Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:43.438 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:43.438 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:43.438 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.438 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:43.438 [592/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.438 [593/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:43.438 [594/743] Linking static target drivers/librte_bus_pci.a 00:02:43.695 [595/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:43.695 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:43.952 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.952 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:43.952 [599/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:43.952 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:43.952 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:43.952 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:44.210 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.210 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.468 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:44.468 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:44.468 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.468 [608/743] Linking static target drivers/librte_mempool_ring.a 00:02:44.468 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.468 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:44.726 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:45.291 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:45.291 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:45.291 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:45.857 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:45.857 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:45.857 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:46.424 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:46.424 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:46.424 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:46.681 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:46.681 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:46.681 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:46.681 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:02:46.681 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:47.617 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:48.182 [627/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:48.182 [628/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:48.182 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:48.182 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:48.182 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:48.182 [632/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:48.182 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:48.182 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:48.438 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:48.695 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:48.953 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:48.953 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:48.953 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:49.210 [640/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:49.210 [641/743] Linking static target lib/librte_vhost.a 00:02:49.210 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:49.468 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:49.468 [644/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:49.468 [645/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:49.468 [646/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:49.468 [647/743] Linking static target drivers/librte_net_i40e.a 00:02:49.468 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:49.468 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:49.726 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:49.984 [651/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.984 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:49.985 [653/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:50.243 [654/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:50.243 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:50.501 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:50.501 [657/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.501 [658/743] Linking target lib/librte_vhost.so.23.0 00:02:50.501 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:50.760 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:51.018 
[661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:51.018 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:51.018 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:51.018 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:51.277 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:51.277 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:51.277 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:51.277 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:51.277 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:51.549 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:51.811 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:52.103 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:52.103 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:52.687 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:52.687 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:52.687 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:52.687 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:52.945 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:53.204 [679/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:53.204 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:53.204 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:53.463 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:53.463 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:53.721 [684/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:53.721 [685/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:53.722 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:53.722 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:53.979 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:54.237 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:54.237 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:54.237 [691/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:54.237 [692/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:54.237 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:54.237 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:54.802 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:54.803 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:55.061 
[697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:55.061 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:55.061 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:55.629 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:55.629 [701/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:55.629 [702/743] Linking static target lib/librte_pipeline.a 00:02:55.629 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:55.888 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:55.888 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:56.147 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:56.147 [707/743] Linking target app/dpdk-dumpcap 00:02:56.147 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:56.405 [709/743] Linking target app/dpdk-pdump 00:02:56.405 [710/743] Linking target app/dpdk-proc-info 00:02:56.405 [711/743] Linking target app/dpdk-test-acl 00:02:56.405 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:56.663 [713/743] Linking target app/dpdk-test-bbdev 00:02:56.663 [714/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:56.663 [715/743] Linking target app/dpdk-test-cmdline 00:02:56.922 [716/743] Linking target app/dpdk-test-compress-perf 00:02:56.922 [717/743] Linking target app/dpdk-test-crypto-perf 00:02:56.922 [718/743] Linking target app/dpdk-test-eventdev 00:02:56.922 [719/743] Linking target app/dpdk-test-fib 00:02:57.180 [720/743] Linking target app/dpdk-test-flow-perf 00:02:57.180 [721/743] Linking target app/dpdk-test-gpudev 00:02:57.180 [722/743] Linking target app/dpdk-test-pipeline 00:02:57.438 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:57.696 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:57.696 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:57.696 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:58.262 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:58.262 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:58.262 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:58.262 [730/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.262 [731/743] Linking target lib/librte_pipeline.so.23.0 00:02:58.521 [732/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:58.521 [733/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:58.780 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:58.780 [735/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:58.780 [736/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:59.038 [737/743] Linking target app/dpdk-test-sad 00:02:59.296 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:59.296 [739/743] Linking target app/dpdk-test-regex 00:02:59.296 [740/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:59.554 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:59.812 [742/743] Linking target app/dpdk-testpmd 00:03:00.070 [743/743] Linking target 
app/dpdk-test-security-perf 00:03:00.070 02:04:59 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:00.071 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:00.071 [0/1] Installing files. 00:03:00.333 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.333 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:00.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.336 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.336 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.337 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.337 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.597 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.598 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.598 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.598 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:00.599 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:00.599 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:00.599 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.599 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:00.599 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.599 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.599 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.599 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.599 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.599 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.599 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.599 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.599 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.884 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.884 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.884 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.884 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.884 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.884 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.884 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.884 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.884 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.885 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.886 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:00.887 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:00.887 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:00.887 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:00.887 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:00.887 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:00.887 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:00.887 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:00.887 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:00.887 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:00.887 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:00.887 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:00.887 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:00.887 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:00.887 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:00.887 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:00.887 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:00.887 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:00.887 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:00.887 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:00.887 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:00.887 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:00.887 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:00.887 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:00.887 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:00.887 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:00.887 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:00.887 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:00.887 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:00.887 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:00.887 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:00.887 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:00.887 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:00.887 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:00.887 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:00.887 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:00.887 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:00.887 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:00.887 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:00.887 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:00.887 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:00.887 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:00.887 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:00.887 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:00.887 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:00.887 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:00.887 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:00.887 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:00.887 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:00.887 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:00.887 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:00.887 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:00.887 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:00.887 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:00.887 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:00.887 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:00.887 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:00.887 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:00.887 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:00.887 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:00.887 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:00.887 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:00.887 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:00.887 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:00.887 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:00.887 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:00.887 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:00.887 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:00.887 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:00.887 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:00.887 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:00.887 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:00.887 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:00.887 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:00.887 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:00.887 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:00.887 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:00.887 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:00.887 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:00.887 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:00.887 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:03:00.887 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:00.887 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:00.887 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:00.887 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:00.887 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:00.887 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:00.887 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:00.887 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:00.887 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:00.887 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:00.887 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:00.888 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:00.888 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:00.888 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:00.888 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:00.888 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:00.888 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:00.888 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:00.888 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:00.888 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:00.888 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:00.888 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:00.888 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:00.888 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:00.888 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:00.888 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:00.888 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:00.888 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:00.888 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:00.888 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:00.888 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:00.888 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:00.888 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:00.888 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:00.888 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:00.888 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:00.888 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:00.888 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:00.888 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:00.888 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:00.888 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:00.888 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:00.888 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:00.888 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:00.888 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:00.888 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:00.888 02:05:00 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:00.888 02:05:00 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:00.888 02:05:00 -- common/autobuild_common.sh@200 -- $ cat 00:03:00.888 02:05:00 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:00.888 00:03:00.888 real 0m50.520s 00:03:00.888 user 5m59.182s 00:03:00.888 sys 0m58.876s 00:03:00.888 02:05:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:00.888 ************************************ 00:03:00.888 END TEST build_native_dpdk 00:03:00.888 ************************************ 00:03:00.888 02:05:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.888 02:05:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:00.888 02:05:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:00.888 02:05:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:00.888 02:05:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:00.888 02:05:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:00.888 02:05:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:00.888 02:05:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:00.888 
02:05:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:01.153 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:01.153 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.153 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:01.153 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:01.721 Using 'verbs' RDMA provider 00:03:17.153 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:29.345 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:29.345 go version go1.21.1 linux/amd64 00:03:29.345 Creating mk/config.mk...done. 00:03:29.345 Creating mk/cc.flags.mk...done. 00:03:29.345 Type 'make' to build. 00:03:29.345 02:05:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:29.345 02:05:27 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:29.345 02:05:27 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:29.345 02:05:27 -- common/autotest_common.sh@10 -- $ set +x 00:03:29.345 ************************************ 00:03:29.345 START TEST make 00:03:29.345 ************************************ 00:03:29.345 02:05:27 -- common/autotest_common.sh@1104 -- $ make -j10 00:03:29.345 make[1]: Nothing to be done for 'all'. 00:03:55.879 CC lib/log/log.o 00:03:55.879 CC lib/ut_mock/mock.o 00:03:55.879 CC lib/ut/ut.o 00:03:55.879 CC lib/log/log_flags.o 00:03:55.879 CC lib/log/log_deprecated.o 00:03:55.879 LIB libspdk_ut_mock.a 00:03:55.879 LIB libspdk_ut.a 00:03:55.879 LIB libspdk_log.a 00:03:55.879 SO libspdk_ut_mock.so.5.0 00:03:55.879 SO libspdk_ut.so.1.0 00:03:55.879 SO libspdk_log.so.6.1 00:03:55.879 SYMLINK libspdk_ut_mock.so 00:03:55.879 SYMLINK libspdk_ut.so 00:03:55.879 SYMLINK libspdk_log.so 00:03:55.879 CC lib/util/base64.o 00:03:55.879 CC lib/util/bit_array.o 00:03:55.879 CC lib/util/cpuset.o 00:03:55.879 CC lib/util/crc32.o 00:03:55.879 CC lib/util/crc16.o 00:03:55.879 CC lib/util/crc32c.o 00:03:55.879 CC lib/dma/dma.o 00:03:55.879 CXX lib/trace_parser/trace.o 00:03:55.879 CC lib/ioat/ioat.o 00:03:55.879 CC lib/vfio_user/host/vfio_user_pci.o 00:03:55.879 CC lib/util/crc32_ieee.o 00:03:55.879 CC lib/util/crc64.o 00:03:55.879 CC lib/util/dif.o 00:03:55.879 CC lib/vfio_user/host/vfio_user.o 00:03:55.879 CC lib/util/fd.o 00:03:55.879 LIB libspdk_dma.a 00:03:55.879 CC lib/util/file.o 00:03:55.879 SO libspdk_dma.so.3.0 00:03:55.879 CC lib/util/hexlify.o 00:03:55.879 SYMLINK libspdk_dma.so 00:03:55.879 CC lib/util/iov.o 00:03:55.879 CC lib/util/math.o 00:03:55.879 LIB libspdk_ioat.a 00:03:55.879 SO libspdk_ioat.so.6.0 00:03:55.879 CC lib/util/pipe.o 00:03:55.879 CC lib/util/strerror_tls.o 00:03:55.880 CC lib/util/string.o 00:03:55.880 SYMLINK libspdk_ioat.so 00:03:55.880 CC lib/util/uuid.o 00:03:55.880 LIB libspdk_vfio_user.a 00:03:55.880 CC lib/util/fd_group.o 00:03:55.880 SO libspdk_vfio_user.so.4.0 00:03:55.880 CC lib/util/xor.o 00:03:55.880 CC lib/util/zipf.o 00:03:55.880 SYMLINK libspdk_vfio_user.so 00:03:55.880 LIB libspdk_util.a 00:03:55.880 SO libspdk_util.so.8.0 00:03:55.880 SYMLINK libspdk_util.so 00:03:55.880 LIB libspdk_trace_parser.a 00:03:55.880 SO libspdk_trace_parser.so.4.0 00:03:55.880 CC 
lib/json/json_parse.o 00:03:55.880 CC lib/json/json_util.o 00:03:55.880 CC lib/rdma/common.o 00:03:55.880 CC lib/json/json_write.o 00:03:55.880 CC lib/rdma/rdma_verbs.o 00:03:55.880 CC lib/conf/conf.o 00:03:55.880 CC lib/idxd/idxd.o 00:03:55.880 CC lib/env_dpdk/env.o 00:03:55.880 CC lib/vmd/vmd.o 00:03:55.880 SYMLINK libspdk_trace_parser.so 00:03:55.880 CC lib/env_dpdk/memory.o 00:03:55.880 CC lib/idxd/idxd_user.o 00:03:55.880 LIB libspdk_conf.a 00:03:55.880 CC lib/idxd/idxd_kernel.o 00:03:55.880 CC lib/env_dpdk/pci.o 00:03:55.880 SO libspdk_conf.so.5.0 00:03:55.880 LIB libspdk_rdma.a 00:03:55.880 LIB libspdk_json.a 00:03:55.880 SO libspdk_rdma.so.5.0 00:03:55.880 SYMLINK libspdk_conf.so 00:03:55.880 SO libspdk_json.so.5.1 00:03:55.880 CC lib/env_dpdk/init.o 00:03:55.880 SYMLINK libspdk_rdma.so 00:03:55.880 CC lib/env_dpdk/threads.o 00:03:55.880 SYMLINK libspdk_json.so 00:03:55.880 CC lib/env_dpdk/pci_ioat.o 00:03:55.880 CC lib/env_dpdk/pci_virtio.o 00:03:55.880 CC lib/env_dpdk/pci_vmd.o 00:03:56.138 CC lib/env_dpdk/pci_idxd.o 00:03:56.138 LIB libspdk_idxd.a 00:03:56.138 CC lib/env_dpdk/pci_event.o 00:03:56.138 CC lib/env_dpdk/sigbus_handler.o 00:03:56.138 CC lib/env_dpdk/pci_dpdk.o 00:03:56.138 CC lib/jsonrpc/jsonrpc_server.o 00:03:56.138 SO libspdk_idxd.so.11.0 00:03:56.138 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:56.138 CC lib/vmd/led.o 00:03:56.138 SYMLINK libspdk_idxd.so 00:03:56.138 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:56.138 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:56.138 CC lib/jsonrpc/jsonrpc_client.o 00:03:56.138 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:56.396 LIB libspdk_vmd.a 00:03:56.396 SO libspdk_vmd.so.5.0 00:03:56.396 SYMLINK libspdk_vmd.so 00:03:56.396 LIB libspdk_jsonrpc.a 00:03:56.396 SO libspdk_jsonrpc.so.5.1 00:03:56.653 SYMLINK libspdk_jsonrpc.so 00:03:56.653 CC lib/rpc/rpc.o 00:03:56.912 LIB libspdk_rpc.a 00:03:56.912 LIB libspdk_env_dpdk.a 00:03:56.912 SO libspdk_rpc.so.5.0 00:03:57.169 SYMLINK libspdk_rpc.so 00:03:57.169 SO libspdk_env_dpdk.so.13.0 00:03:57.169 CC lib/notify/notify_rpc.o 00:03:57.169 CC lib/notify/notify.o 00:03:57.169 CC lib/trace/trace.o 00:03:57.169 CC lib/trace/trace_flags.o 00:03:57.169 CC lib/trace/trace_rpc.o 00:03:57.169 CC lib/sock/sock_rpc.o 00:03:57.169 CC lib/sock/sock.o 00:03:57.169 SYMLINK libspdk_env_dpdk.so 00:03:57.428 LIB libspdk_notify.a 00:03:57.428 SO libspdk_notify.so.5.0 00:03:57.428 SYMLINK libspdk_notify.so 00:03:57.428 LIB libspdk_trace.a 00:03:57.428 SO libspdk_trace.so.9.0 00:03:57.685 LIB libspdk_sock.a 00:03:57.685 SYMLINK libspdk_trace.so 00:03:57.685 SO libspdk_sock.so.8.0 00:03:57.685 SYMLINK libspdk_sock.so 00:03:57.685 CC lib/thread/thread.o 00:03:57.685 CC lib/thread/iobuf.o 00:03:57.943 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:57.943 CC lib/nvme/nvme_ctrlr.o 00:03:57.943 CC lib/nvme/nvme_fabric.o 00:03:57.943 CC lib/nvme/nvme_ns_cmd.o 00:03:57.943 CC lib/nvme/nvme_ns.o 00:03:57.943 CC lib/nvme/nvme_pcie.o 00:03:57.943 CC lib/nvme/nvme_pcie_common.o 00:03:57.943 CC lib/nvme/nvme_qpair.o 00:03:58.200 CC lib/nvme/nvme.o 00:03:58.764 CC lib/nvme/nvme_quirks.o 00:03:58.764 CC lib/nvme/nvme_transport.o 00:03:58.764 CC lib/nvme/nvme_discovery.o 00:03:58.764 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:58.764 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:58.764 CC lib/nvme/nvme_tcp.o 00:03:59.021 CC lib/nvme/nvme_opal.o 00:03:59.021 CC lib/nvme/nvme_io_msg.o 00:03:59.280 CC lib/nvme/nvme_poll_group.o 00:03:59.280 CC lib/nvme/nvme_zns.o 00:03:59.280 CC lib/nvme/nvme_cuse.o 00:03:59.280 CC lib/nvme/nvme_vfio_user.o 00:03:59.280 LIB 
libspdk_thread.a 00:03:59.280 SO libspdk_thread.so.9.0 00:03:59.538 CC lib/nvme/nvme_rdma.o 00:03:59.538 SYMLINK libspdk_thread.so 00:03:59.538 CC lib/accel/accel.o 00:03:59.538 CC lib/blob/blobstore.o 00:03:59.796 CC lib/blob/request.o 00:04:00.058 CC lib/blob/zeroes.o 00:04:00.058 CC lib/blob/blob_bs_dev.o 00:04:00.058 CC lib/accel/accel_rpc.o 00:04:00.058 CC lib/init/json_config.o 00:04:00.058 CC lib/virtio/virtio.o 00:04:00.320 CC lib/virtio/virtio_vhost_user.o 00:04:00.320 CC lib/virtio/virtio_vfio_user.o 00:04:00.320 CC lib/accel/accel_sw.o 00:04:00.320 CC lib/virtio/virtio_pci.o 00:04:00.320 CC lib/init/subsystem.o 00:04:00.320 CC lib/init/subsystem_rpc.o 00:04:00.320 CC lib/init/rpc.o 00:04:00.578 LIB libspdk_init.a 00:04:00.578 LIB libspdk_virtio.a 00:04:00.578 LIB libspdk_accel.a 00:04:00.578 SO libspdk_init.so.4.0 00:04:00.578 SO libspdk_virtio.so.6.0 00:04:00.578 SO libspdk_accel.so.14.0 00:04:00.837 SYMLINK libspdk_init.so 00:04:00.837 SYMLINK libspdk_virtio.so 00:04:00.837 SYMLINK libspdk_accel.so 00:04:00.837 LIB libspdk_nvme.a 00:04:00.837 CC lib/event/app.o 00:04:00.837 CC lib/event/log_rpc.o 00:04:00.837 CC lib/event/reactor.o 00:04:00.837 CC lib/event/scheduler_static.o 00:04:00.837 CC lib/event/app_rpc.o 00:04:00.837 CC lib/bdev/bdev_zone.o 00:04:00.837 CC lib/bdev/bdev_rpc.o 00:04:00.837 CC lib/bdev/bdev.o 00:04:01.096 SO libspdk_nvme.so.12.0 00:04:01.096 CC lib/bdev/part.o 00:04:01.096 CC lib/bdev/scsi_nvme.o 00:04:01.355 LIB libspdk_event.a 00:04:01.355 SO libspdk_event.so.12.0 00:04:01.355 SYMLINK libspdk_nvme.so 00:04:01.355 SYMLINK libspdk_event.so 00:04:02.729 LIB libspdk_blob.a 00:04:02.729 SO libspdk_blob.so.10.1 00:04:02.729 SYMLINK libspdk_blob.so 00:04:02.729 CC lib/lvol/lvol.o 00:04:02.729 CC lib/blobfs/blobfs.o 00:04:02.729 CC lib/blobfs/tree.o 00:04:03.663 LIB libspdk_bdev.a 00:04:03.663 LIB libspdk_blobfs.a 00:04:03.663 SO libspdk_bdev.so.14.0 00:04:03.663 SO libspdk_blobfs.so.9.0 00:04:03.663 SYMLINK libspdk_blobfs.so 00:04:03.663 LIB libspdk_lvol.a 00:04:03.663 SYMLINK libspdk_bdev.so 00:04:03.921 SO libspdk_lvol.so.9.1 00:04:03.921 SYMLINK libspdk_lvol.so 00:04:03.921 CC lib/nvmf/ctrlr.o 00:04:03.921 CC lib/nbd/nbd.o 00:04:03.921 CC lib/nvmf/ctrlr_discovery.o 00:04:03.921 CC lib/nbd/nbd_rpc.o 00:04:03.921 CC lib/ublk/ublk.o 00:04:03.921 CC lib/nvmf/ctrlr_bdev.o 00:04:03.921 CC lib/ublk/ublk_rpc.o 00:04:03.921 CC lib/scsi/dev.o 00:04:03.921 CC lib/nvmf/subsystem.o 00:04:03.921 CC lib/ftl/ftl_core.o 00:04:04.179 CC lib/ftl/ftl_init.o 00:04:04.179 CC lib/scsi/lun.o 00:04:04.179 CC lib/scsi/port.o 00:04:04.179 LIB libspdk_nbd.a 00:04:04.437 SO libspdk_nbd.so.6.0 00:04:04.437 CC lib/nvmf/nvmf.o 00:04:04.437 SYMLINK libspdk_nbd.so 00:04:04.437 CC lib/nvmf/nvmf_rpc.o 00:04:04.437 CC lib/ftl/ftl_layout.o 00:04:04.437 CC lib/ftl/ftl_debug.o 00:04:04.437 CC lib/ftl/ftl_io.o 00:04:04.437 CC lib/scsi/scsi.o 00:04:04.437 LIB libspdk_ublk.a 00:04:04.437 SO libspdk_ublk.so.2.0 00:04:04.695 CC lib/nvmf/transport.o 00:04:04.695 SYMLINK libspdk_ublk.so 00:04:04.695 CC lib/ftl/ftl_sb.o 00:04:04.695 CC lib/scsi/scsi_bdev.o 00:04:04.695 CC lib/ftl/ftl_l2p.o 00:04:04.695 CC lib/scsi/scsi_pr.o 00:04:04.695 CC lib/scsi/scsi_rpc.o 00:04:04.953 CC lib/ftl/ftl_l2p_flat.o 00:04:04.953 CC lib/ftl/ftl_nv_cache.o 00:04:04.953 CC lib/scsi/task.o 00:04:04.953 CC lib/ftl/ftl_band.o 00:04:04.953 CC lib/nvmf/tcp.o 00:04:05.211 CC lib/nvmf/rdma.o 00:04:05.211 CC lib/ftl/ftl_band_ops.o 00:04:05.211 LIB libspdk_scsi.a 00:04:05.211 CC lib/ftl/ftl_writer.o 00:04:05.211 SO libspdk_scsi.so.8.0 
00:04:05.211 CC lib/ftl/ftl_rq.o 00:04:05.211 CC lib/ftl/ftl_reloc.o 00:04:05.211 SYMLINK libspdk_scsi.so 00:04:05.211 CC lib/ftl/ftl_l2p_cache.o 00:04:05.470 CC lib/ftl/ftl_p2l.o 00:04:05.470 CC lib/ftl/mngt/ftl_mngt.o 00:04:05.470 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:05.470 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:05.728 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:05.728 CC lib/iscsi/conn.o 00:04:05.728 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:05.728 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:05.728 CC lib/iscsi/init_grp.o 00:04:05.728 CC lib/vhost/vhost.o 00:04:05.728 CC lib/vhost/vhost_rpc.o 00:04:05.728 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:05.728 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:05.987 CC lib/iscsi/iscsi.o 00:04:05.987 CC lib/iscsi/md5.o 00:04:05.987 CC lib/vhost/vhost_scsi.o 00:04:05.987 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:05.987 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:06.246 CC lib/iscsi/param.o 00:04:06.246 CC lib/iscsi/portal_grp.o 00:04:06.246 CC lib/vhost/vhost_blk.o 00:04:06.246 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:06.504 CC lib/vhost/rte_vhost_user.o 00:04:06.504 CC lib/iscsi/tgt_node.o 00:04:06.504 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:06.505 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:06.505 CC lib/iscsi/iscsi_subsystem.o 00:04:06.505 CC lib/ftl/utils/ftl_conf.o 00:04:06.763 CC lib/iscsi/iscsi_rpc.o 00:04:06.763 CC lib/ftl/utils/ftl_md.o 00:04:06.763 CC lib/iscsi/task.o 00:04:06.763 CC lib/ftl/utils/ftl_mempool.o 00:04:07.021 CC lib/ftl/utils/ftl_bitmap.o 00:04:07.021 CC lib/ftl/utils/ftl_property.o 00:04:07.021 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:07.021 LIB libspdk_nvmf.a 00:04:07.021 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:07.021 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:07.021 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:07.279 SO libspdk_nvmf.so.17.0 00:04:07.279 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:07.279 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:07.279 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:07.279 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:07.279 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:07.279 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:07.279 CC lib/ftl/base/ftl_base_dev.o 00:04:07.279 LIB libspdk_iscsi.a 00:04:07.279 CC lib/ftl/base/ftl_base_bdev.o 00:04:07.279 SYMLINK libspdk_nvmf.so 00:04:07.279 CC lib/ftl/ftl_trace.o 00:04:07.537 SO libspdk_iscsi.so.7.0 00:04:07.537 LIB libspdk_vhost.a 00:04:07.537 SO libspdk_vhost.so.7.1 00:04:07.537 SYMLINK libspdk_iscsi.so 00:04:07.537 LIB libspdk_ftl.a 00:04:07.537 SYMLINK libspdk_vhost.so 00:04:07.796 SO libspdk_ftl.so.8.0 00:04:08.054 SYMLINK libspdk_ftl.so 00:04:08.312 CC module/env_dpdk/env_dpdk_rpc.o 00:04:08.312 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:08.312 CC module/blob/bdev/blob_bdev.o 00:04:08.312 CC module/accel/iaa/accel_iaa.o 00:04:08.312 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:08.312 CC module/scheduler/gscheduler/gscheduler.o 00:04:08.312 CC module/accel/dsa/accel_dsa.o 00:04:08.312 CC module/accel/error/accel_error.o 00:04:08.312 CC module/accel/ioat/accel_ioat.o 00:04:08.312 CC module/sock/posix/posix.o 00:04:08.571 LIB libspdk_env_dpdk_rpc.a 00:04:08.571 SO libspdk_env_dpdk_rpc.so.5.0 00:04:08.571 LIB libspdk_scheduler_dpdk_governor.a 00:04:08.571 LIB libspdk_scheduler_gscheduler.a 00:04:08.571 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:08.571 SO libspdk_scheduler_gscheduler.so.3.0 00:04:08.571 SYMLINK libspdk_env_dpdk_rpc.so 00:04:08.571 CC module/accel/error/accel_error_rpc.o 00:04:08.571 CC module/accel/ioat/accel_ioat_rpc.o 00:04:08.571 CC 
module/accel/iaa/accel_iaa_rpc.o 00:04:08.571 LIB libspdk_scheduler_dynamic.a 00:04:08.571 CC module/accel/dsa/accel_dsa_rpc.o 00:04:08.571 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:08.571 SYMLINK libspdk_scheduler_gscheduler.so 00:04:08.571 SO libspdk_scheduler_dynamic.so.3.0 00:04:08.571 LIB libspdk_blob_bdev.a 00:04:08.835 SO libspdk_blob_bdev.so.10.1 00:04:08.835 SYMLINK libspdk_scheduler_dynamic.so 00:04:08.835 LIB libspdk_accel_ioat.a 00:04:08.835 LIB libspdk_accel_error.a 00:04:08.835 LIB libspdk_accel_dsa.a 00:04:08.835 SYMLINK libspdk_blob_bdev.so 00:04:08.835 LIB libspdk_accel_iaa.a 00:04:08.835 SO libspdk_accel_ioat.so.5.0 00:04:08.835 SO libspdk_accel_error.so.1.0 00:04:08.835 SO libspdk_accel_iaa.so.2.0 00:04:08.835 SO libspdk_accel_dsa.so.4.0 00:04:08.835 SYMLINK libspdk_accel_ioat.so 00:04:08.835 SYMLINK libspdk_accel_error.so 00:04:08.835 SYMLINK libspdk_accel_iaa.so 00:04:08.835 SYMLINK libspdk_accel_dsa.so 00:04:08.835 CC module/bdev/gpt/gpt.o 00:04:08.835 CC module/bdev/delay/vbdev_delay.o 00:04:08.835 CC module/blobfs/bdev/blobfs_bdev.o 00:04:08.835 CC module/bdev/error/vbdev_error.o 00:04:08.835 CC module/bdev/lvol/vbdev_lvol.o 00:04:09.113 CC module/bdev/nvme/bdev_nvme.o 00:04:09.113 CC module/bdev/null/bdev_null.o 00:04:09.113 CC module/bdev/malloc/bdev_malloc.o 00:04:09.113 CC module/bdev/passthru/vbdev_passthru.o 00:04:09.113 LIB libspdk_sock_posix.a 00:04:09.113 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:09.113 CC module/bdev/gpt/vbdev_gpt.o 00:04:09.113 SO libspdk_sock_posix.so.5.0 00:04:09.113 CC module/bdev/error/vbdev_error_rpc.o 00:04:09.113 CC module/bdev/null/bdev_null_rpc.o 00:04:09.113 SYMLINK libspdk_sock_posix.so 00:04:09.113 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:09.371 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:09.371 LIB libspdk_blobfs_bdev.a 00:04:09.371 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:09.371 SO libspdk_blobfs_bdev.so.5.0 00:04:09.371 LIB libspdk_bdev_error.a 00:04:09.371 LIB libspdk_bdev_null.a 00:04:09.371 LIB libspdk_bdev_passthru.a 00:04:09.371 SYMLINK libspdk_blobfs_bdev.so 00:04:09.371 SO libspdk_bdev_error.so.5.0 00:04:09.371 LIB libspdk_bdev_gpt.a 00:04:09.371 SO libspdk_bdev_null.so.5.0 00:04:09.371 SO libspdk_bdev_passthru.so.5.0 00:04:09.371 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:09.371 CC module/bdev/raid/bdev_raid.o 00:04:09.371 LIB libspdk_bdev_malloc.a 00:04:09.371 SO libspdk_bdev_gpt.so.5.0 00:04:09.371 LIB libspdk_bdev_delay.a 00:04:09.371 SYMLINK libspdk_bdev_error.so 00:04:09.630 SO libspdk_bdev_malloc.so.5.0 00:04:09.630 SYMLINK libspdk_bdev_passthru.so 00:04:09.631 SO libspdk_bdev_delay.so.5.0 00:04:09.631 SYMLINK libspdk_bdev_null.so 00:04:09.631 CC module/bdev/raid/bdev_raid_rpc.o 00:04:09.631 SYMLINK libspdk_bdev_gpt.so 00:04:09.631 CC module/bdev/split/vbdev_split.o 00:04:09.631 CC module/bdev/raid/bdev_raid_sb.o 00:04:09.631 SYMLINK libspdk_bdev_malloc.so 00:04:09.631 CC module/bdev/raid/raid0.o 00:04:09.631 SYMLINK libspdk_bdev_delay.so 00:04:09.631 CC module/bdev/raid/raid1.o 00:04:09.631 CC module/bdev/aio/bdev_aio.o 00:04:09.631 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:09.889 CC module/bdev/split/vbdev_split_rpc.o 00:04:09.889 CC module/bdev/aio/bdev_aio_rpc.o 00:04:09.889 LIB libspdk_bdev_lvol.a 00:04:09.889 SO libspdk_bdev_lvol.so.5.0 00:04:09.889 CC module/bdev/raid/concat.o 00:04:09.889 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:09.889 SYMLINK libspdk_bdev_lvol.so 00:04:09.889 LIB libspdk_bdev_split.a 00:04:09.889 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 
00:04:09.889 CC module/bdev/ftl/bdev_ftl.o 00:04:09.889 LIB libspdk_bdev_aio.a 00:04:09.889 SO libspdk_bdev_split.so.5.0 00:04:09.889 SO libspdk_bdev_aio.so.5.0 00:04:10.148 CC module/bdev/iscsi/bdev_iscsi.o 00:04:10.148 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:10.148 SYMLINK libspdk_bdev_split.so 00:04:10.148 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:10.148 CC module/bdev/nvme/nvme_rpc.o 00:04:10.148 SYMLINK libspdk_bdev_aio.so 00:04:10.148 LIB libspdk_bdev_zone_block.a 00:04:10.148 SO libspdk_bdev_zone_block.so.5.0 00:04:10.148 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:10.148 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:10.148 SYMLINK libspdk_bdev_zone_block.so 00:04:10.148 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:10.148 CC module/bdev/nvme/bdev_mdns_client.o 00:04:10.407 LIB libspdk_bdev_raid.a 00:04:10.407 LIB libspdk_bdev_ftl.a 00:04:10.407 CC module/bdev/nvme/vbdev_opal.o 00:04:10.407 SO libspdk_bdev_ftl.so.5.0 00:04:10.407 SO libspdk_bdev_raid.so.5.0 00:04:10.407 LIB libspdk_bdev_iscsi.a 00:04:10.407 SYMLINK libspdk_bdev_ftl.so 00:04:10.407 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:10.407 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:10.407 SO libspdk_bdev_iscsi.so.5.0 00:04:10.407 SYMLINK libspdk_bdev_raid.so 00:04:10.407 SYMLINK libspdk_bdev_iscsi.so 00:04:10.666 LIB libspdk_bdev_virtio.a 00:04:10.666 SO libspdk_bdev_virtio.so.5.0 00:04:10.925 SYMLINK libspdk_bdev_virtio.so 00:04:10.925 LIB libspdk_bdev_nvme.a 00:04:11.184 SO libspdk_bdev_nvme.so.6.0 00:04:11.184 SYMLINK libspdk_bdev_nvme.so 00:04:11.442 CC module/event/subsystems/vmd/vmd.o 00:04:11.442 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:11.442 CC module/event/subsystems/iobuf/iobuf.o 00:04:11.442 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:11.442 CC module/event/subsystems/sock/sock.o 00:04:11.442 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:11.442 CC module/event/subsystems/scheduler/scheduler.o 00:04:11.701 LIB libspdk_event_vhost_blk.a 00:04:11.701 LIB libspdk_event_sock.a 00:04:11.701 LIB libspdk_event_vmd.a 00:04:11.701 SO libspdk_event_vhost_blk.so.2.0 00:04:11.701 SO libspdk_event_sock.so.4.0 00:04:11.701 LIB libspdk_event_scheduler.a 00:04:11.701 SO libspdk_event_vmd.so.5.0 00:04:11.701 LIB libspdk_event_iobuf.a 00:04:11.701 SO libspdk_event_scheduler.so.3.0 00:04:11.701 SYMLINK libspdk_event_sock.so 00:04:11.701 SO libspdk_event_iobuf.so.2.0 00:04:11.701 SYMLINK libspdk_event_vhost_blk.so 00:04:11.701 SYMLINK libspdk_event_vmd.so 00:04:11.701 SYMLINK libspdk_event_scheduler.so 00:04:11.701 SYMLINK libspdk_event_iobuf.so 00:04:11.964 CC module/event/subsystems/accel/accel.o 00:04:12.225 LIB libspdk_event_accel.a 00:04:12.225 SO libspdk_event_accel.so.5.0 00:04:12.225 SYMLINK libspdk_event_accel.so 00:04:12.483 CC module/event/subsystems/bdev/bdev.o 00:04:12.741 LIB libspdk_event_bdev.a 00:04:12.741 SO libspdk_event_bdev.so.5.0 00:04:12.741 SYMLINK libspdk_event_bdev.so 00:04:12.998 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:12.998 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:12.998 CC module/event/subsystems/ublk/ublk.o 00:04:12.998 CC module/event/subsystems/scsi/scsi.o 00:04:12.998 CC module/event/subsystems/nbd/nbd.o 00:04:12.998 LIB libspdk_event_nbd.a 00:04:12.998 LIB libspdk_event_ublk.a 00:04:12.998 LIB libspdk_event_scsi.a 00:04:12.998 SO libspdk_event_nbd.so.5.0 00:04:12.998 SO libspdk_event_ublk.so.2.0 00:04:12.998 SO libspdk_event_scsi.so.5.0 00:04:13.255 LIB libspdk_event_nvmf.a 00:04:13.255 SYMLINK libspdk_event_nbd.so 00:04:13.255 SYMLINK 
libspdk_event_ublk.so 00:04:13.255 SYMLINK libspdk_event_scsi.so 00:04:13.255 SO libspdk_event_nvmf.so.5.0 00:04:13.255 SYMLINK libspdk_event_nvmf.so 00:04:13.255 CC module/event/subsystems/iscsi/iscsi.o 00:04:13.255 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:13.513 LIB libspdk_event_vhost_scsi.a 00:04:13.513 LIB libspdk_event_iscsi.a 00:04:13.513 SO libspdk_event_vhost_scsi.so.2.0 00:04:13.513 SO libspdk_event_iscsi.so.5.0 00:04:13.513 SYMLINK libspdk_event_vhost_scsi.so 00:04:13.775 SYMLINK libspdk_event_iscsi.so 00:04:13.775 SO libspdk.so.5.0 00:04:13.775 SYMLINK libspdk.so 00:04:14.033 CC app/trace_record/trace_record.o 00:04:14.033 CXX app/trace/trace.o 00:04:14.033 CC app/spdk_lspci/spdk_lspci.o 00:04:14.033 CC app/nvmf_tgt/nvmf_main.o 00:04:14.033 CC examples/accel/perf/accel_perf.o 00:04:14.033 CC app/spdk_tgt/spdk_tgt.o 00:04:14.033 CC app/iscsi_tgt/iscsi_tgt.o 00:04:14.033 CC test/bdev/bdevio/bdevio.o 00:04:14.033 CC test/app/bdev_svc/bdev_svc.o 00:04:14.033 CC test/accel/dif/dif.o 00:04:14.033 LINK spdk_lspci 00:04:14.291 LINK nvmf_tgt 00:04:14.291 LINK spdk_trace_record 00:04:14.291 LINK bdev_svc 00:04:14.291 LINK iscsi_tgt 00:04:14.291 LINK spdk_tgt 00:04:14.291 LINK spdk_trace 00:04:14.291 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.291 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:14.548 LINK dif 00:04:14.548 LINK bdevio 00:04:14.548 CC app/spdk_nvme_perf/perf.o 00:04:14.548 LINK accel_perf 00:04:14.548 CC app/spdk_nvme_identify/identify.o 00:04:14.548 CC examples/bdev/hello_world/hello_bdev.o 00:04:14.548 CC examples/bdev/bdevperf/bdevperf.o 00:04:14.548 CC test/blobfs/mkfs/mkfs.o 00:04:14.807 TEST_HEADER include/spdk/accel.h 00:04:14.807 TEST_HEADER include/spdk/accel_module.h 00:04:14.807 TEST_HEADER include/spdk/assert.h 00:04:14.807 TEST_HEADER include/spdk/barrier.h 00:04:14.807 TEST_HEADER include/spdk/base64.h 00:04:14.807 TEST_HEADER include/spdk/bdev.h 00:04:14.807 TEST_HEADER include/spdk/bdev_module.h 00:04:14.807 TEST_HEADER include/spdk/bdev_zone.h 00:04:14.807 TEST_HEADER include/spdk/bit_array.h 00:04:14.807 TEST_HEADER include/spdk/bit_pool.h 00:04:14.807 TEST_HEADER include/spdk/blob_bdev.h 00:04:14.807 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:14.807 TEST_HEADER include/spdk/blobfs.h 00:04:14.807 TEST_HEADER include/spdk/blob.h 00:04:14.807 TEST_HEADER include/spdk/conf.h 00:04:14.807 TEST_HEADER include/spdk/config.h 00:04:14.807 TEST_HEADER include/spdk/cpuset.h 00:04:14.807 TEST_HEADER include/spdk/crc16.h 00:04:14.807 TEST_HEADER include/spdk/crc32.h 00:04:14.807 TEST_HEADER include/spdk/crc64.h 00:04:14.807 TEST_HEADER include/spdk/dif.h 00:04:14.807 TEST_HEADER include/spdk/dma.h 00:04:14.807 TEST_HEADER include/spdk/endian.h 00:04:14.807 TEST_HEADER include/spdk/env_dpdk.h 00:04:14.807 TEST_HEADER include/spdk/env.h 00:04:14.807 TEST_HEADER include/spdk/event.h 00:04:14.807 TEST_HEADER include/spdk/fd_group.h 00:04:14.807 TEST_HEADER include/spdk/fd.h 00:04:14.807 TEST_HEADER include/spdk/file.h 00:04:14.807 TEST_HEADER include/spdk/ftl.h 00:04:14.807 TEST_HEADER include/spdk/gpt_spec.h 00:04:14.807 TEST_HEADER include/spdk/hexlify.h 00:04:14.807 TEST_HEADER include/spdk/histogram_data.h 00:04:14.807 CC examples/blob/hello_world/hello_blob.o 00:04:14.807 TEST_HEADER include/spdk/idxd.h 00:04:14.807 TEST_HEADER include/spdk/idxd_spec.h 00:04:14.807 TEST_HEADER include/spdk/init.h 00:04:14.807 TEST_HEADER include/spdk/ioat.h 00:04:14.807 TEST_HEADER include/spdk/ioat_spec.h 00:04:14.807 LINK nvme_fuzz 00:04:14.808 TEST_HEADER 
include/spdk/iscsi_spec.h 00:04:14.808 TEST_HEADER include/spdk/json.h 00:04:14.808 TEST_HEADER include/spdk/jsonrpc.h 00:04:14.808 TEST_HEADER include/spdk/likely.h 00:04:14.808 TEST_HEADER include/spdk/log.h 00:04:14.808 TEST_HEADER include/spdk/lvol.h 00:04:14.808 TEST_HEADER include/spdk/memory.h 00:04:14.808 TEST_HEADER include/spdk/mmio.h 00:04:14.808 TEST_HEADER include/spdk/nbd.h 00:04:14.808 CC test/dma/test_dma/test_dma.o 00:04:14.808 TEST_HEADER include/spdk/notify.h 00:04:14.808 TEST_HEADER include/spdk/nvme.h 00:04:14.808 TEST_HEADER include/spdk/nvme_intel.h 00:04:14.808 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:14.808 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:14.808 TEST_HEADER include/spdk/nvme_spec.h 00:04:14.808 TEST_HEADER include/spdk/nvme_zns.h 00:04:14.808 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:14.808 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:14.808 TEST_HEADER include/spdk/nvmf.h 00:04:14.808 TEST_HEADER include/spdk/nvmf_spec.h 00:04:14.808 TEST_HEADER include/spdk/nvmf_transport.h 00:04:14.808 TEST_HEADER include/spdk/opal.h 00:04:14.808 TEST_HEADER include/spdk/opal_spec.h 00:04:14.808 TEST_HEADER include/spdk/pci_ids.h 00:04:14.808 TEST_HEADER include/spdk/pipe.h 00:04:14.808 TEST_HEADER include/spdk/queue.h 00:04:14.808 TEST_HEADER include/spdk/reduce.h 00:04:14.808 TEST_HEADER include/spdk/rpc.h 00:04:14.808 TEST_HEADER include/spdk/scheduler.h 00:04:14.808 TEST_HEADER include/spdk/scsi.h 00:04:14.808 TEST_HEADER include/spdk/scsi_spec.h 00:04:14.808 LINK mkfs 00:04:14.808 TEST_HEADER include/spdk/sock.h 00:04:14.808 TEST_HEADER include/spdk/stdinc.h 00:04:14.808 TEST_HEADER include/spdk/string.h 00:04:14.808 LINK hello_bdev 00:04:14.808 TEST_HEADER include/spdk/thread.h 00:04:14.808 TEST_HEADER include/spdk/trace.h 00:04:14.808 TEST_HEADER include/spdk/trace_parser.h 00:04:14.808 TEST_HEADER include/spdk/tree.h 00:04:14.808 TEST_HEADER include/spdk/ublk.h 00:04:14.808 TEST_HEADER include/spdk/util.h 00:04:14.808 TEST_HEADER include/spdk/uuid.h 00:04:14.808 TEST_HEADER include/spdk/version.h 00:04:14.808 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:14.808 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:14.808 TEST_HEADER include/spdk/vhost.h 00:04:14.808 TEST_HEADER include/spdk/vmd.h 00:04:14.808 TEST_HEADER include/spdk/xor.h 00:04:14.808 TEST_HEADER include/spdk/zipf.h 00:04:14.808 CXX test/cpp_headers/accel.o 00:04:15.066 LINK hello_blob 00:04:15.066 CC examples/blob/cli/blobcli.o 00:04:15.066 CXX test/cpp_headers/accel_module.o 00:04:15.066 CC examples/nvme/hello_world/hello_world.o 00:04:15.066 CC examples/ioat/perf/perf.o 00:04:15.324 LINK test_dma 00:04:15.324 CXX test/cpp_headers/assert.o 00:04:15.324 LINK spdk_nvme_perf 00:04:15.324 CC app/spdk_nvme_discover/discovery_aer.o 00:04:15.324 LINK spdk_nvme_identify 00:04:15.324 LINK bdevperf 00:04:15.324 LINK hello_world 00:04:15.324 LINK ioat_perf 00:04:15.324 CXX test/cpp_headers/barrier.o 00:04:15.582 CC test/app/histogram_perf/histogram_perf.o 00:04:15.582 CC examples/ioat/verify/verify.o 00:04:15.582 LINK spdk_nvme_discover 00:04:15.582 LINK blobcli 00:04:15.582 CC test/app/jsoncat/jsoncat.o 00:04:15.582 CXX test/cpp_headers/base64.o 00:04:15.582 CC test/app/stub/stub.o 00:04:15.582 CXX test/cpp_headers/bdev.o 00:04:15.582 CC examples/nvme/reconnect/reconnect.o 00:04:15.582 LINK histogram_perf 00:04:15.582 CC app/spdk_top/spdk_top.o 00:04:15.840 LINK verify 00:04:15.840 CXX test/cpp_headers/bdev_module.o 00:04:15.840 LINK jsoncat 00:04:15.840 LINK stub 00:04:15.840 CC 
app/vhost/vhost.o 00:04:15.840 CC app/spdk_dd/spdk_dd.o 00:04:15.840 CXX test/cpp_headers/bdev_zone.o 00:04:15.840 CC app/fio/nvme/fio_plugin.o 00:04:15.840 CC app/fio/bdev/fio_plugin.o 00:04:16.098 CC examples/sock/hello_world/hello_sock.o 00:04:16.098 LINK reconnect 00:04:16.098 LINK iscsi_fuzz 00:04:16.098 CC examples/vmd/lsvmd/lsvmd.o 00:04:16.098 LINK vhost 00:04:16.098 CXX test/cpp_headers/bit_array.o 00:04:16.098 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:16.098 LINK lsvmd 00:04:16.098 LINK hello_sock 00:04:16.356 LINK spdk_dd 00:04:16.356 CXX test/cpp_headers/bit_pool.o 00:04:16.356 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:16.356 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:16.356 CXX test/cpp_headers/blob_bdev.o 00:04:16.356 CC examples/vmd/led/led.o 00:04:16.356 LINK spdk_bdev 00:04:16.356 LINK spdk_nvme 00:04:16.614 CC examples/nvme/arbitration/arbitration.o 00:04:16.614 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:16.614 CC examples/nvme/hotplug/hotplug.o 00:04:16.614 LINK spdk_top 00:04:16.614 CXX test/cpp_headers/blobfs_bdev.o 00:04:16.614 CXX test/cpp_headers/blobfs.o 00:04:16.614 LINK led 00:04:16.614 CC examples/nvme/abort/abort.o 00:04:16.614 CXX test/cpp_headers/blob.o 00:04:16.614 LINK cmb_copy 00:04:16.614 LINK nvme_manage 00:04:16.614 LINK vhost_fuzz 00:04:16.873 LINK hotplug 00:04:16.873 LINK arbitration 00:04:16.873 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:16.873 CXX test/cpp_headers/conf.o 00:04:16.873 CXX test/cpp_headers/config.o 00:04:16.873 CC examples/nvmf/nvmf/nvmf.o 00:04:16.873 CC test/env/vtophys/vtophys.o 00:04:16.873 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:16.873 CC test/env/mem_callbacks/mem_callbacks.o 00:04:16.873 CC test/event/event_perf/event_perf.o 00:04:16.873 CC test/env/memory/memory_ut.o 00:04:17.131 CXX test/cpp_headers/cpuset.o 00:04:17.131 LINK pmr_persistence 00:04:17.131 LINK abort 00:04:17.131 CC test/env/pci/pci_ut.o 00:04:17.131 LINK vtophys 00:04:17.131 LINK env_dpdk_post_init 00:04:17.131 LINK event_perf 00:04:17.131 CXX test/cpp_headers/crc16.o 00:04:17.131 LINK mem_callbacks 00:04:17.131 CXX test/cpp_headers/crc32.o 00:04:17.131 LINK nvmf 00:04:17.389 CC examples/util/zipf/zipf.o 00:04:17.389 CC test/event/reactor/reactor.o 00:04:17.389 CC examples/thread/thread/thread_ex.o 00:04:17.389 CC examples/idxd/perf/perf.o 00:04:17.389 CXX test/cpp_headers/crc64.o 00:04:17.389 LINK pci_ut 00:04:17.389 CC test/nvme/aer/aer.o 00:04:17.389 LINK zipf 00:04:17.389 CC test/lvol/esnap/esnap.o 00:04:17.389 LINK memory_ut 00:04:17.389 LINK reactor 00:04:17.647 CC test/nvme/reset/reset.o 00:04:17.647 CXX test/cpp_headers/dif.o 00:04:17.647 CXX test/cpp_headers/dma.o 00:04:17.647 LINK thread 00:04:17.647 CC test/event/reactor_perf/reactor_perf.o 00:04:17.647 LINK idxd_perf 00:04:17.647 LINK aer 00:04:17.904 CXX test/cpp_headers/endian.o 00:04:17.904 LINK reset 00:04:17.904 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:17.904 CC test/rpc_client/rpc_client_test.o 00:04:17.904 LINK reactor_perf 00:04:17.904 CC test/nvme/sgl/sgl.o 00:04:17.904 CC test/nvme/e2edp/nvme_dp.o 00:04:17.904 CC test/event/app_repeat/app_repeat.o 00:04:17.904 CXX test/cpp_headers/env_dpdk.o 00:04:17.904 LINK interrupt_tgt 00:04:17.904 LINK rpc_client_test 00:04:18.161 CC test/event/scheduler/scheduler.o 00:04:18.161 LINK app_repeat 00:04:18.161 CXX test/cpp_headers/env.o 00:04:18.161 CC test/thread/poller_perf/poller_perf.o 00:04:18.161 CXX test/cpp_headers/event.o 00:04:18.161 CXX test/cpp_headers/fd_group.o 00:04:18.161 LINK 
sgl 00:04:18.161 LINK nvme_dp 00:04:18.161 LINK scheduler 00:04:18.419 LINK poller_perf 00:04:18.419 CXX test/cpp_headers/fd.o 00:04:18.419 CC test/nvme/overhead/overhead.o 00:04:18.419 CC test/nvme/err_injection/err_injection.o 00:04:18.419 CC test/nvme/startup/startup.o 00:04:18.419 CC test/nvme/reserve/reserve.o 00:04:18.419 CC test/nvme/simple_copy/simple_copy.o 00:04:18.419 CC test/nvme/connect_stress/connect_stress.o 00:04:18.419 CC test/nvme/boot_partition/boot_partition.o 00:04:18.419 CXX test/cpp_headers/file.o 00:04:18.677 LINK startup 00:04:18.677 LINK err_injection 00:04:18.677 LINK reserve 00:04:18.677 LINK connect_stress 00:04:18.677 LINK overhead 00:04:18.677 LINK boot_partition 00:04:18.677 LINK simple_copy 00:04:18.677 CXX test/cpp_headers/ftl.o 00:04:18.677 CXX test/cpp_headers/gpt_spec.o 00:04:18.677 CXX test/cpp_headers/hexlify.o 00:04:18.935 CC test/nvme/compliance/nvme_compliance.o 00:04:18.935 CC test/nvme/fused_ordering/fused_ordering.o 00:04:18.935 CXX test/cpp_headers/histogram_data.o 00:04:18.935 CC test/nvme/fdp/fdp.o 00:04:18.935 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:18.935 CXX test/cpp_headers/idxd.o 00:04:18.935 CXX test/cpp_headers/idxd_spec.o 00:04:19.193 CC test/nvme/cuse/cuse.o 00:04:19.193 CXX test/cpp_headers/init.o 00:04:19.193 LINK fused_ordering 00:04:19.193 LINK doorbell_aers 00:04:19.193 CXX test/cpp_headers/ioat.o 00:04:19.193 LINK nvme_compliance 00:04:19.193 CXX test/cpp_headers/ioat_spec.o 00:04:19.193 CXX test/cpp_headers/iscsi_spec.o 00:04:19.193 CXX test/cpp_headers/json.o 00:04:19.193 CXX test/cpp_headers/jsonrpc.o 00:04:19.451 LINK fdp 00:04:19.451 CXX test/cpp_headers/likely.o 00:04:19.451 CXX test/cpp_headers/log.o 00:04:19.451 CXX test/cpp_headers/lvol.o 00:04:19.451 CXX test/cpp_headers/memory.o 00:04:19.451 CXX test/cpp_headers/mmio.o 00:04:19.451 CXX test/cpp_headers/nbd.o 00:04:19.451 CXX test/cpp_headers/notify.o 00:04:19.451 CXX test/cpp_headers/nvme.o 00:04:19.451 CXX test/cpp_headers/nvme_intel.o 00:04:19.710 CXX test/cpp_headers/nvme_ocssd.o 00:04:19.710 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:19.710 CXX test/cpp_headers/nvme_spec.o 00:04:19.710 CXX test/cpp_headers/nvme_zns.o 00:04:19.710 CXX test/cpp_headers/nvmf_cmd.o 00:04:19.710 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:19.710 CXX test/cpp_headers/nvmf.o 00:04:19.710 CXX test/cpp_headers/nvmf_spec.o 00:04:19.710 CXX test/cpp_headers/nvmf_transport.o 00:04:19.968 CXX test/cpp_headers/opal.o 00:04:19.968 CXX test/cpp_headers/opal_spec.o 00:04:19.968 CXX test/cpp_headers/pci_ids.o 00:04:19.968 CXX test/cpp_headers/pipe.o 00:04:19.968 CXX test/cpp_headers/queue.o 00:04:19.968 CXX test/cpp_headers/reduce.o 00:04:19.968 CXX test/cpp_headers/rpc.o 00:04:19.968 CXX test/cpp_headers/scheduler.o 00:04:19.968 CXX test/cpp_headers/scsi.o 00:04:19.968 CXX test/cpp_headers/scsi_spec.o 00:04:19.968 CXX test/cpp_headers/sock.o 00:04:20.225 CXX test/cpp_headers/stdinc.o 00:04:20.225 LINK cuse 00:04:20.225 CXX test/cpp_headers/string.o 00:04:20.225 CXX test/cpp_headers/thread.o 00:04:20.225 CXX test/cpp_headers/trace.o 00:04:20.225 CXX test/cpp_headers/trace_parser.o 00:04:20.225 CXX test/cpp_headers/tree.o 00:04:20.225 CXX test/cpp_headers/ublk.o 00:04:20.483 CXX test/cpp_headers/util.o 00:04:20.483 CXX test/cpp_headers/uuid.o 00:04:20.483 CXX test/cpp_headers/version.o 00:04:20.483 CXX test/cpp_headers/vfio_user_pci.o 00:04:20.483 CXX test/cpp_headers/vfio_user_spec.o 00:04:20.483 CXX test/cpp_headers/vhost.o 00:04:20.483 CXX test/cpp_headers/vmd.o 00:04:20.483 CXX 
test/cpp_headers/xor.o 00:04:20.483 CXX test/cpp_headers/zipf.o 00:04:22.380 LINK esnap 00:04:24.908 00:04:24.908 real 0m56.515s 00:04:24.908 user 5m20.161s 00:04:24.908 sys 1m5.633s 00:04:24.908 02:06:23 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:24.908 02:06:23 -- common/autotest_common.sh@10 -- $ set +x 00:04:24.908 ************************************ 00:04:24.908 END TEST make 00:04:24.908 ************************************ 00:04:24.908 02:06:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.908 02:06:24 -- nvmf/common.sh@7 -- # uname -s 00:04:24.908 02:06:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.908 02:06:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.908 02:06:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.908 02:06:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.908 02:06:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.908 02:06:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.908 02:06:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.908 02:06:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.908 02:06:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.908 02:06:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.908 02:06:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:04:24.908 02:06:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:04:24.908 02:06:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.908 02:06:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.908 02:06:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:24.908 02:06:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.908 02:06:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.908 02:06:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.908 02:06:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.908 02:06:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.908 02:06:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.908 02:06:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.908 02:06:24 -- paths/export.sh@5 -- # export PATH 00:04:24.908 02:06:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.908 02:06:24 -- nvmf/common.sh@46 -- # : 0 
00:04:24.908 02:06:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:24.908 02:06:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:24.908 02:06:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:24.908 02:06:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.908 02:06:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.908 02:06:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:24.908 02:06:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:24.908 02:06:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:24.908 02:06:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:24.908 02:06:24 -- spdk/autotest.sh@32 -- # uname -s 00:04:24.908 02:06:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:24.908 02:06:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:24.908 02:06:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:24.908 02:06:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:24.908 02:06:24 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:24.908 02:06:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:24.908 02:06:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:24.908 02:06:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:24.908 02:06:24 -- spdk/autotest.sh@48 -- # udevadm_pid=61382 00:04:24.908 02:06:24 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:24.908 02:06:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:24.908 02:06:24 -- spdk/autotest.sh@54 -- # echo 61395 00:04:24.908 02:06:24 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:24.908 02:06:24 -- spdk/autotest.sh@56 -- # echo 61399 00:04:24.908 02:06:24 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:24.908 02:06:24 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:24.908 02:06:24 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:24.908 02:06:24 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:24.908 02:06:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:24.908 02:06:24 -- common/autotest_common.sh@10 -- # set +x 00:04:24.908 02:06:24 -- spdk/autotest.sh@70 -- # create_test_list 00:04:24.908 02:06:24 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:24.908 02:06:24 -- common/autotest_common.sh@10 -- # set +x 00:04:24.908 02:06:24 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:24.908 02:06:24 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:24.908 02:06:24 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:24.908 02:06:24 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:24.908 02:06:24 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:24.908 02:06:24 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:24.908 02:06:24 -- common/autotest_common.sh@1440 -- # uname 00:04:24.908 02:06:24 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:24.908 02:06:24 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:24.908 02:06:24 -- common/autotest_common.sh@1460 -- # uname 00:04:24.908 
02:06:24 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:24.908 02:06:24 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:24.908 02:06:24 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:24.908 02:06:24 -- spdk/autotest.sh@83 -- # hash lcov 00:04:24.908 02:06:24 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:24.908 02:06:24 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:24.908 --rc lcov_branch_coverage=1 00:04:24.908 --rc lcov_function_coverage=1 00:04:24.908 --rc genhtml_branch_coverage=1 00:04:24.908 --rc genhtml_function_coverage=1 00:04:24.908 --rc genhtml_legend=1 00:04:24.908 --rc geninfo_all_blocks=1 00:04:24.908 ' 00:04:24.908 02:06:24 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:24.908 --rc lcov_branch_coverage=1 00:04:24.908 --rc lcov_function_coverage=1 00:04:24.908 --rc genhtml_branch_coverage=1 00:04:24.908 --rc genhtml_function_coverage=1 00:04:24.908 --rc genhtml_legend=1 00:04:24.908 --rc geninfo_all_blocks=1 00:04:24.908 ' 00:04:24.908 02:06:24 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:24.908 --rc lcov_branch_coverage=1 00:04:24.908 --rc lcov_function_coverage=1 00:04:24.908 --rc genhtml_branch_coverage=1 00:04:24.908 --rc genhtml_function_coverage=1 00:04:24.908 --rc genhtml_legend=1 00:04:24.908 --rc geninfo_all_blocks=1 00:04:24.908 --no-external' 00:04:24.908 02:06:24 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:24.908 --rc lcov_branch_coverage=1 00:04:24.908 --rc lcov_function_coverage=1 00:04:24.908 --rc genhtml_branch_coverage=1 00:04:24.908 --rc genhtml_function_coverage=1 00:04:24.908 --rc genhtml_legend=1 00:04:24.908 --rc geninfo_all_blocks=1 00:04:24.908 --no-external' 00:04:24.908 02:06:24 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:24.908 lcov: LCOV version 1.14 00:04:24.909 02:06:24 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:33.022 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:33.022 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:33.022 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:33.022 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:33.022 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:33.022 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:51.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:51.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:51.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:51.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:51.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:51.107 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:51.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:51.107 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:51.107 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 
00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:51.108 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:51.108 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:51.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:51.109 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:51.109 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:51.109 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:51.109 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:51.368 02:06:50 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:51.368 02:06:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:51.368 02:06:50 -- common/autotest_common.sh@10 -- # set +x 00:04:51.368 02:06:50 -- spdk/autotest.sh@102 -- # rm -f 00:04:51.368 02:06:50 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.195 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:52.195 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:52.195 02:06:51 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:52.195 02:06:51 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:52.195 02:06:51 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:52.195 02:06:51 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:52.195 02:06:51 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:52.195 02:06:51 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:52.195 02:06:51 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:52.195 02:06:51 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.195 02:06:51 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:52.195 02:06:51 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:52.195 02:06:51 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n2 00:04:52.195 02:06:51 -- common/autotest_common.sh@1647 -- # local device=nvme0n2 00:04:52.195 02:06:51 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:52.195 02:06:51 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:52.195 02:06:51 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:52.195 02:06:51 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n3 00:04:52.195 02:06:51 -- common/autotest_common.sh@1647 -- # local device=nvme0n3 00:04:52.195 02:06:51 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:52.195 02:06:51 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:52.195 02:06:51 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:52.195 02:06:51 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:52.195 02:06:51 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:52.195 02:06:51 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:52.195 02:06:51 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:52.195 02:06:51 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:52.195 02:06:51 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme1n1 00:04:52.195 02:06:51 -- spdk/autotest.sh@121 -- # grep -v p 00:04:52.195 02:06:51 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:52.195 02:06:51 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:52.195 02:06:51 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:52.195 02:06:51 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:52.195 02:06:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 
00:04:52.195 No valid GPT data, bailing 00:04:52.195 02:06:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.195 02:06:51 -- scripts/common.sh@393 -- # pt= 00:04:52.195 02:06:51 -- scripts/common.sh@394 -- # return 1 00:04:52.195 02:06:51 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:52.195 1+0 records in 00:04:52.195 1+0 records out 00:04:52.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497891 s, 211 MB/s 00:04:52.195 02:06:51 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:52.195 02:06:51 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:52.195 02:06:51 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n2 00:04:52.195 02:06:51 -- scripts/common.sh@380 -- # local block=/dev/nvme0n2 pt 00:04:52.195 02:06:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:52.195 No valid GPT data, bailing 00:04:52.195 02:06:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:52.195 02:06:51 -- scripts/common.sh@393 -- # pt= 00:04:52.195 02:06:51 -- scripts/common.sh@394 -- # return 1 00:04:52.195 02:06:51 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:52.195 1+0 records in 00:04:52.195 1+0 records out 00:04:52.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415951 s, 252 MB/s 00:04:52.195 02:06:51 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:52.195 02:06:51 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:52.195 02:06:51 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n3 00:04:52.195 02:06:51 -- scripts/common.sh@380 -- # local block=/dev/nvme0n3 pt 00:04:52.195 02:06:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:04:52.454 No valid GPT data, bailing 00:04:52.454 02:06:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:52.454 02:06:51 -- scripts/common.sh@393 -- # pt= 00:04:52.454 02:06:51 -- scripts/common.sh@394 -- # return 1 00:04:52.454 02:06:51 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:04:52.454 1+0 records in 00:04:52.454 1+0 records out 00:04:52.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454151 s, 231 MB/s 00:04:52.454 02:06:51 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:52.454 02:06:51 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:52.454 02:06:51 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:52.454 02:06:51 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:52.454 02:06:51 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:52.454 No valid GPT data, bailing 00:04:52.454 02:06:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:52.454 02:06:51 -- scripts/common.sh@393 -- # pt= 00:04:52.454 02:06:51 -- scripts/common.sh@394 -- # return 1 00:04:52.454 02:06:51 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:52.454 1+0 records in 00:04:52.454 1+0 records out 00:04:52.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406839 s, 258 MB/s 00:04:52.454 02:06:51 -- spdk/autotest.sh@129 -- # sync 00:04:52.454 02:06:51 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:52.454 02:06:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:52.454 02:06:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:54.374 
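Every candidate namespace above clears the same three hurdles before being scrubbed: it must not be zoned, spdk-gpt.py must find no GPT ("No valid GPT data, bailing"), and blkid must report an empty PTTYPE; only then is its first MiB zeroed. A rough reconstruction of that per-device flow; treating a non-zero spdk-gpt.py exit status as "no GPT found" is an assumption about how block_in_use composes these probes:

    rootdir=/home/vagrant/spdk_repo/spdk

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        name=${dev#/dev/}
        # Zoned namespaces are left untouched (the /sys/block/*/queue/zoned probes above).
        if [[ -e /sys/block/$name/queue/zoned && $(< "/sys/block/$name/queue/zoned") != none ]]; then
            continue
        fi
        # Assumed contract: the GPT probe fails and blkid prints nothing when the
        # device carries no partition table, i.e. it is safe to wipe.
        if ! "$rootdir/scripts/spdk-gpt.py" "$dev" && [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done

All four namespaces in this run passed the probes, hence the four "1+0 records in/out" stamps before the final sync.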
02:06:53 -- spdk/autotest.sh@135 -- # uname -s 00:04:54.374 02:06:53 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:54.374 02:06:53 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:54.374 02:06:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.374 02:06:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.374 02:06:53 -- common/autotest_common.sh@10 -- # set +x 00:04:54.374 ************************************ 00:04:54.374 START TEST setup.sh 00:04:54.374 ************************************ 00:04:54.374 02:06:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:54.374 * Looking for test storage... 00:04:54.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:54.374 02:06:53 -- setup/test-setup.sh@10 -- # uname -s 00:04:54.374 02:06:53 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:54.374 02:06:53 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:54.374 02:06:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.374 02:06:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.374 02:06:53 -- common/autotest_common.sh@10 -- # set +x 00:04:54.374 ************************************ 00:04:54.374 START TEST acl 00:04:54.374 ************************************ 00:04:54.375 02:06:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:54.636 * Looking for test storage... 00:04:54.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:54.636 02:06:53 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:54.636 02:06:53 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:54.636 02:06:53 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:54.636 02:06:53 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:54.636 02:06:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:54.636 02:06:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:54.636 02:06:53 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:54.636 02:06:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.636 02:06:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:54.636 02:06:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:54.636 02:06:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n2 00:04:54.636 02:06:53 -- common/autotest_common.sh@1647 -- # local device=nvme0n2 00:04:54.636 02:06:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:54.636 02:06:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:54.636 02:06:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:54.636 02:06:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n3 00:04:54.636 02:06:53 -- common/autotest_common.sh@1647 -- # local device=nvme0n3 00:04:54.636 02:06:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:54.636 02:06:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:54.636 02:06:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:54.636 02:06:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:54.636 02:06:53 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:54.636 02:06:53 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:54.636 02:06:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:54.636 02:06:53 -- setup/acl.sh@12 -- # devs=() 00:04:54.636 02:06:53 -- setup/acl.sh@12 -- # declare -a devs 00:04:54.636 02:06:53 -- setup/acl.sh@13 -- # drivers=() 00:04:54.636 02:06:53 -- setup/acl.sh@13 -- # declare -A drivers 00:04:54.636 02:06:53 -- setup/acl.sh@51 -- # setup reset 00:04:54.636 02:06:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.636 02:06:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.204 02:06:54 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:55.205 02:06:54 -- setup/acl.sh@16 -- # local dev driver 00:04:55.205 02:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.205 02:06:54 -- setup/acl.sh@15 -- # setup output status 00:04:55.205 02:06:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.205 02:06:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:55.464 Hugepages 00:04:55.464 node hugesize free / total 00:04:55.464 02:06:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:55.464 02:06:54 -- setup/acl.sh@19 -- # continue 00:04:55.464 02:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.464 00:04:55.464 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.464 02:06:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:55.464 02:06:54 -- setup/acl.sh@19 -- # continue 00:04:55.464 02:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.464 02:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:55.464 02:06:54 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:55.464 02:06:54 -- setup/acl.sh@20 -- # continue 00:04:55.464 02:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.464 02:06:54 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:55.464 02:06:54 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:55.464 02:06:54 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:55.464 02:06:54 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:55.464 02:06:54 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:55.464 02:06:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.722 02:06:55 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:55.722 02:06:55 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:55.722 02:06:55 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:55.722 02:06:55 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:55.722 02:06:55 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:55.722 02:06:55 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.722 02:06:55 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:55.722 02:06:55 -- setup/acl.sh@54 -- # run_test denied denied 00:04:55.722 02:06:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.722 02:06:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.722 02:06:55 -- common/autotest_common.sh@10 -- # set +x 00:04:55.722 ************************************ 00:04:55.722 START TEST denied 00:04:55.722 ************************************ 00:04:55.722 02:06:55 -- common/autotest_common.sh@1104 -- # denied 00:04:55.722 02:06:55 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:55.722 02:06:55 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:55.722 02:06:55 -- setup/acl.sh@38 -- # setup output config 00:04:55.722 02:06:55 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:04:55.722 02:06:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.656 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:56.656 02:06:55 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:56.656 02:06:55 -- setup/acl.sh@28 -- # local dev driver 00:04:56.656 02:06:55 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:56.656 02:06:55 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:56.656 02:06:55 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:56.656 02:06:55 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:56.656 02:06:55 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:56.656 02:06:55 -- setup/acl.sh@41 -- # setup reset 00:04:56.656 02:06:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:56.656 02:06:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.222 00:04:57.222 real 0m1.475s 00:04:57.222 user 0m0.598s 00:04:57.222 sys 0m0.814s 00:04:57.222 02:06:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.222 ************************************ 00:04:57.222 END TEST denied 00:04:57.222 ************************************ 00:04:57.222 02:06:56 -- common/autotest_common.sh@10 -- # set +x 00:04:57.222 02:06:56 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:57.222 02:06:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.222 02:06:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.222 02:06:56 -- common/autotest_common.sh@10 -- # set +x 00:04:57.222 ************************************ 00:04:57.222 START TEST allowed 00:04:57.222 ************************************ 00:04:57.222 02:06:56 -- common/autotest_common.sh@1104 -- # allowed 00:04:57.222 02:06:56 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:57.222 02:06:56 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:57.222 02:06:56 -- setup/acl.sh@45 -- # setup output config 00:04:57.222 02:06:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.222 02:06:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:57.788 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.788 02:06:57 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:57.788 02:06:57 -- setup/acl.sh@28 -- # local dev driver 00:04:57.788 02:06:57 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:57.788 02:06:57 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:57.788 02:06:57 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:58.047 02:06:57 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:58.047 02:06:57 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:58.047 02:06:57 -- setup/acl.sh@48 -- # setup reset 00:04:58.047 02:06:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.047 02:06:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.615 00:04:58.615 real 0m1.488s 00:04:58.615 user 0m0.689s 00:04:58.615 sys 0m0.789s 00:04:58.615 02:06:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.615 ************************************ 00:04:58.615 02:06:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.615 END TEST allowed 00:04:58.615 ************************************ 00:04:58.615 00:04:58.615 real 0m4.205s 00:04:58.615 user 0m1.848s 00:04:58.615 sys 0m2.308s 00:04:58.615 02:06:58 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.615 ************************************ 00:04:58.615 END TEST acl 00:04:58.615 ************************************ 00:04:58.615 02:06:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.615 02:06:58 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:58.615 02:06:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.615 02:06:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.615 02:06:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.615 ************************************ 00:04:58.615 START TEST hugepages 00:04:58.615 ************************************ 00:04:58.615 02:06:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:58.874 * Looking for test storage... 00:04:58.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:58.874 02:06:58 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:58.874 02:06:58 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:58.874 02:06:58 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:58.874 02:06:58 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:58.874 02:06:58 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:58.875 02:06:58 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:58.875 02:06:58 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:58.875 02:06:58 -- setup/common.sh@18 -- # local node= 00:04:58.875 02:06:58 -- setup/common.sh@19 -- # local var val 00:04:58.875 02:06:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.875 02:06:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.875 02:06:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.875 02:06:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.875 02:06:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.875 02:06:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.875 02:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.875 02:06:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4757584 kB' 'MemAvailable: 7368824 kB' 'Buffers: 2436 kB' 'Cached: 2813672 kB' 'SwapCached: 0 kB' 'Active: 475696 kB' 'Inactive: 2443376 kB' 'Active(anon): 113456 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 104620 kB' 'Mapped: 48672 kB' 'Shmem: 10492 kB' 'KReclaimable: 85128 kB' 'Slab: 164888 kB' 'SReclaimable: 85128 kB' 'SUnreclaim: 79760 kB' 'KernelStack: 6572 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 338688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:04:58.875 02:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.875 02:06:58 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.875 02:06:58 -- setup/common.sh@32 -- # continue 00:04:58.875 [... the xtrace output repeats the same "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" test and "continue" for every remaining /proc/meminfo field listed in the dump above ...] 00:04:58.876 02:06:58 --
setup/common.sh@32 -- # continue 00:04:58.876 02:06:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.876 02:06:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.876 02:06:58 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:58.876 02:06:58 -- setup/common.sh@33 -- # echo 2048 00:04:58.876 02:06:58 -- setup/common.sh@33 -- # return 0 00:04:58.876 02:06:58 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:58.876 02:06:58 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:58.876 02:06:58 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:58.876 02:06:58 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:58.876 02:06:58 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:58.876 02:06:58 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:58.876 02:06:58 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:58.876 02:06:58 -- setup/hugepages.sh@207 -- # get_nodes 00:04:58.876 02:06:58 -- setup/hugepages.sh@27 -- # local node 00:04:58.876 02:06:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.876 02:06:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:58.876 02:06:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:58.876 02:06:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.876 02:06:58 -- setup/hugepages.sh@208 -- # clear_hp 00:04:58.876 02:06:58 -- setup/hugepages.sh@37 -- # local node hp 00:04:58.876 02:06:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:58.876 02:06:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.876 02:06:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:58.876 02:06:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:58.876 02:06:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:58.876 02:06:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:58.876 02:06:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:58.876 02:06:58 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:58.876 02:06:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.876 02:06:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.876 02:06:58 -- common/autotest_common.sh@10 -- # set +x 00:04:58.876 ************************************ 00:04:58.876 START TEST default_setup 00:04:58.876 ************************************ 00:04:58.876 02:06:58 -- common/autotest_common.sh@1104 -- # default_setup 00:04:58.876 02:06:58 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:58.876 02:06:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.876 02:06:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:58.876 02:06:58 -- setup/hugepages.sh@51 -- # shift 00:04:58.876 02:06:58 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:58.876 02:06:58 -- setup/hugepages.sh@52 -- # local node_ids 00:04:58.876 02:06:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.876 02:06:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.876 02:06:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:58.876 02:06:58 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:58.876 02:06:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.876 02:06:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.876 02:06:58 -- setup/hugepages.sh@65 -- # local 
_no_nodes=1
00:04:58.876 02:06:58 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:58.876 02:06:58 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:58.876 02:06:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:58.876 02:06:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:58.876 02:06:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:58.876 02:06:58 -- setup/hugepages.sh@73 -- # return 0
00:04:58.876 02:06:58 -- setup/hugepages.sh@137 -- # setup output
00:04:58.876 02:06:58 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:58.876 02:06:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:59.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:59.705 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:59.705 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:59.705 02:06:59 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:59.705 02:06:59 -- setup/hugepages.sh@89 -- # local node
00:04:59.705 02:06:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:59.705 02:06:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:59.705 02:06:59 -- setup/hugepages.sh@92 -- # local surp
00:04:59.705 02:06:59 -- setup/hugepages.sh@93 -- # local resv
00:04:59.705 02:06:59 -- setup/hugepages.sh@94 -- # local anon
00:04:59.705 02:06:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:59.705 02:06:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:59.705 02:06:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:59.705 02:06:59 -- setup/common.sh@18 -- # local node=
00:04:59.705 02:06:59 -- setup/common.sh@19 -- # local var val
00:04:59.705 02:06:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.705 02:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.705 02:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.705 02:06:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.705 02:06:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.705 02:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.705 02:06:59 -- setup/common.sh@31 -- # IFS=': '
00:04:59.705 02:06:59 -- setup/common.sh@31 -- # read -r var val _
00:04:59.705 02:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6864244 kB' 'MemAvailable: 9475296 kB' 'Buffers: 2436 kB' 'Cached: 2813668 kB' 'SwapCached: 0 kB' 'Active: 492080 kB' 'Inactive: 2443380 kB' 'Active(anon): 129840 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120760 kB' 'Mapped: 48792 kB' 'Shmem: 10476 kB' 'KReclaimable: 84740 kB' 'Slab: 164396 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 6576 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[... xtrace of the key-by-key scan elided: every key from MemTotal through HardwareCorrupted fails [[ $var == AnonHugePages ]] and hits continue, with IFS=': ' and read -r var val _ repeated between checks ...]
00:04:59.706 02:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:59.706 02:06:59 -- setup/common.sh@33 -- # echo 0
00:04:59.706 02:06:59 -- setup/common.sh@33 -- # return 0
00:04:59.706 02:06:59 -- setup/hugepages.sh@97 -- # anon=0
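Each get_meminfo call traced above performs the same lookup: pick a meminfo view, strip any per-node prefix, then scan key/value pairs until the requested key matches. A minimal bash sketch of that helper, reconstructed from the traced commands (the real setup/common.sh may order its checks differently; get_meminfo and the sysfs/procfs paths come from the trace, everything else is illustrative):

shopt -s extglob   # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Use the per-node view when a node id was given and sysfs exposes one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so both
    # views parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

With a helper like this, the values the trace extracts become one-liners: anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp), or per node, get_meminfo HugePages_Surp 0.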
00:04:59.706 02:06:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:59.706 02:06:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.706 02:06:59 -- setup/common.sh@18 -- # local node=
00:04:59.706 02:06:59 -- setup/common.sh@19 -- # local var val
00:04:59.706 02:06:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.706 02:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.706 02:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.706 02:06:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.706 02:06:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.706 02:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.706 02:06:59 -- setup/common.sh@31 -- # IFS=': '
00:04:59.706 02:06:59 -- setup/common.sh@31 -- # read -r var val _
00:04:59.706 02:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6864244 kB' 'MemAvailable: 9475296 kB' 'Buffers: 2436 kB' 'Cached: 2813668 kB' 'SwapCached: 0 kB' 'Active: 491980 kB' 'Inactive: 2443380 kB' 'Active(anon): 129740 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120624 kB' 'Mapped: 48792 kB' 'Shmem: 10476 kB' 'KReclaimable: 84740 kB' 'Slab: 164392 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79652 kB' 'KernelStack: 6560 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[... xtrace scan elided: every key from MemTotal through HugePages_Rsvd fails [[ $var == HugePages_Surp ]] and hits continue, with IFS=': ' and read -r var val _ repeated between checks ...]
00:04:59.708 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.708 02:06:59 -- setup/common.sh@33 -- # echo 0
00:04:59.708 02:06:59 -- setup/common.sh@33 -- # return 0
00:04:59.708 02:06:59 -- setup/hugepages.sh@99 -- # surp=0
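The HugePages_* fields being scanned out of those snapshots are standard kernel counters, so the same numbers can be sanity-checked outside the harness. These are stock procfs/sysfs interfaces for the default 2048 kB page size shown in the snapshots:

grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
cat /proc/sys/vm/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages \
    /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages \
    /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages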
00:04:59.708 02:06:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.708 02:06:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.708 02:06:59 -- setup/common.sh@18 -- # local node=
00:04:59.708 02:06:59 -- setup/common.sh@19 -- # local var val
00:04:59.708 02:06:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.708 02:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.708 02:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.708 02:06:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.708 02:06:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.708 02:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.708 02:06:59 -- setup/common.sh@31 -- # IFS=': '
00:04:59.708 02:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6864244 kB' 'MemAvailable: 9475296 kB' 'Buffers: 2436 kB' 'Cached: 2813668 kB' 'SwapCached: 0 kB' 'Active: 491864 kB' 'Inactive: 2443380 kB' 'Active(anon): 129624 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120524 kB' 'Mapped: 48672 kB' 'Shmem: 10476 kB' 'KReclaimable: 84740 kB' 'Slab: 164392 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79652 kB' 'KernelStack: 6576 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:59.708 02:06:59 -- setup/common.sh@31 -- # read -r var val _
[... xtrace scan elided: every key from MemTotal through HugePages_Free fails [[ $var == HugePages_Rsvd ]] and hits continue, with IFS=': ' and read -r var val _ repeated between checks ...]
00:04:59.709 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:59.709 02:06:59 -- setup/common.sh@33 -- # echo 0
00:04:59.709 02:06:59 -- setup/common.sh@33 -- # return 0
00:04:59.709 02:06:59 -- setup/hugepages.sh@100 -- # resv=0
00:04:59.709 02:06:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:59.709 nr_hugepages=1024
resv_hugepages=0
02:06:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
02:06:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
02:06:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
02:06:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:59.709 02:06:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
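The arithmetic at hugepages.sh@107 and @109 is the heart of verify_nr_hugepages: the pool the kernel reports must equal the pages requested plus any surplus and reserved pages, which in this run reduces to 1024 == 1024 + 0 + 0. A standalone sketch of the same invariant, assuming the get_meminfo sketch above (variable names mirror the trace):

nr_hugepages=1024                      # target set earlier by the harness
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "hugepage pool mismatch: got $total, expected $((nr_hugepages + surp + resv))" >&2
    exit 1
fi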
00:04:59.709 02:06:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:59.709 02:06:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:59.709 02:06:59 -- setup/common.sh@18 -- # local node=
00:04:59.709 02:06:59 -- setup/common.sh@19 -- # local var val
00:04:59.709 02:06:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.709 02:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.709 02:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.709 02:06:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.709 02:06:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.709 02:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.709 02:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6864604 kB' 'MemAvailable: 9475656 kB' 'Buffers: 2436 kB' 'Cached: 2813668 kB' 'SwapCached: 0 kB' 'Active: 491744 kB' 'Inactive: 2443380 kB' 'Active(anon): 129504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120392 kB' 'Mapped: 48672 kB' 'Shmem: 10476 kB' 'KReclaimable: 84740 kB' 'Slab: 164380 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79640 kB' 'KernelStack: 6528 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:04:59.709 02:06:59 -- setup/common.sh@31 -- # IFS=': '
00:04:59.709 02:06:59 -- setup/common.sh@31 -- # read -r var val _
[... xtrace scan elided: every key from MemTotal through Unaccepted fails [[ $var == HugePages_Total ]] and hits continue, with IFS=': ' and read -r var val _ repeated between checks ...]
00:04:59.710 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:59.710 02:06:59 -- setup/common.sh@33 -- # echo 1024
00:04:59.710 02:06:59 -- setup/common.sh@33 -- # return 0
00:04:59.710 02:06:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
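The get_nodes pass traced next discovers NUMA nodes by globbing sysfs and records a per-node page count, after which get_meminfo is re-run with a node argument so it reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A hedged sketch of that enumeration (reading HugePages_Total per node is an assumption for illustration; the trace only shows nodes_sys[0]=1024 being recorded):

shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    node=${node##*node}    # "/sys/devices/system/node/node0" -> "0"
    nodes_sys[$node]=$(get_meminfo HugePages_Total "$node")   # assumed source of the 1024
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes visible in sysfs" >&2; exit 1; }
echo "HugePages_Total per node: ${nodes_sys[*]} (across $no_nodes node(s))"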
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.710 02:06:59 -- setup/common.sh@32 -- # continue 00:04:59.710 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.710 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.710 02:06:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.710 02:06:59 -- setup/common.sh@32 -- # continue 00:04:59.710 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.710 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.710 02:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.710 02:06:59 -- setup/common.sh@32 -- # continue 00:04:59.710 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.710 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.710 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.710 02:06:59 -- setup/common.sh@33 -- # echo 1024 00:04:59.710 02:06:59 -- setup/common.sh@33 -- # return 0 00:04:59.710 02:06:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.710 02:06:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.710 02:06:59 -- setup/hugepages.sh@27 -- # local node 00:04:59.710 02:06:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.711 02:06:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.711 02:06:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:59.711 02:06:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.711 02:06:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.711 02:06:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.711 02:06:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.711 02:06:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.711 02:06:59 -- setup/common.sh@18 -- # local node=0 00:04:59.711 02:06:59 -- setup/common.sh@19 -- # local var val 00:04:59.711 02:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.711 02:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.711 02:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.711 02:06:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.711 02:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.711 02:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.711 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.711 02:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6864356 kB' 'MemUsed: 5377624 kB' 'SwapCached: 0 kB' 'Active: 491452 kB' 'Inactive: 2443380 kB' 'Active(anon): 129212 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443380 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2816104 kB' 'Mapped: 48672 kB' 'AnonPages: 120564 kB' 'Shmem: 10476 kB' 'KernelStack: 6560 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84740 kB' 'Slab: 164376 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:59.711 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.711 02:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
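An aside for readers following the trace: the lines above are setup/common.sh's get_meminfo helper running under 'set -x', one read/compare pair per meminfo key. A minimal sketch of the helper as it can be reconstructed from the traced commands follows (a sketch, not the verbatim SPDK source; the loop plumbing and the node check at @23/@25 are simplified):

    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo <field> [node] -- print the value of one meminfo field,
    # from /proc/meminfo or from the per-node sysfs file when a node is given.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # With an empty node this test fails and /proc/meminfo is kept,
        # matching the @23/@25 entries in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the many skips in the trace
            echo "$val"                        # e.g. 1024 for HugePages_Total
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total      # prints 1024 in the run above
    get_meminfo HugePages_Surp 0     # per-node variant, prints 0 below

Each skipped key shows up in the log as one read/compare/continue triple, which is why a single call produces dozens of near-identical xtrace lines. The caller at hugepages.sh@110 then asserts that the kernel total matches the expectation: (( 1024 == nr_hugepages + surp + resv )).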
00:04:59.711 [... xtrace trimmed: the same common.sh@31-@32 loop now walks the node0 snapshot printed above, skipping every key from MemTotal through HugePages_Free with 'continue' ...]
00:04:59.712 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.712 02:06:59 -- setup/common.sh@33 -- # echo 0
00:04:59.712 02:06:59 -- setup/common.sh@33 -- # return 0
00:04:59.712 02:06:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:59.712 02:06:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.712 02:06:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.712 02:06:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.712 02:06:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
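The @127 assignments just traced, sorted_t[nodes_test[node]]=1 and sorted_s[nodes_sys[node]]=1, use a standard bash trick: associative-array keys act as a de-duplicated set of the values seen across nodes. A self-contained illustration with this run's numbers (the loop body is paraphrased, not copied from hugepages.sh):

    #!/usr/bin/env bash
    # Collect the distinct hugepage counts observed across NUMA nodes.
    declare -A sorted_t=() sorted_s=()
    nodes_test=([0]=1024)   # expected count per node, as in this run
    nodes_sys=([0]=1024)    # count reported by the kernel per node
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # key = value, duplicates collapse
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    echo "distinct expected counts: ${!sorted_t[*]}"   # -> 1024

With a single node both sets end up holding the one key 1024, which is what makes the 'node0=1024 expecting 1024' line below a pass.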
00:04:59.712 node0=1024 expecting 1024
00:04:59.712 02:06:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:59.712 
00:04:59.712 real 0m0.983s
00:04:59.712 user 0m0.470s
00:04:59.712 sys 0m0.468s
00:04:59.712 02:06:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:59.712 02:06:59 -- common/autotest_common.sh@10 -- # set +x
00:04:59.712 ************************************
00:04:59.712 END TEST default_setup
00:04:59.712 ************************************
00:04:59.970 02:06:59 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:59.971 02:06:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:59.971 02:06:59 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:59.971 02:06:59 -- common/autotest_common.sh@10 -- # set +x
00:04:59.971 ************************************
00:04:59.971 START TEST per_node_1G_alloc
00:04:59.971 ************************************
00:04:59.971 02:06:59 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:59.971 02:06:59 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:59.971 02:06:59 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:59.971 02:06:59 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:59.971 02:06:59 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:59.971 02:06:59 -- setup/hugepages.sh@51 -- # shift
00:04:59.971 02:06:59 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:59.971 02:06:59 -- setup/hugepages.sh@52 -- # local node_ids
00:04:59.971 02:06:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:59.971 02:06:59 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:59.971 02:06:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:59.971 02:06:59 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:59.971 02:06:59 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.971 02:06:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:59.971 02:06:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:59.971 02:06:59 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.971 02:06:59 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.971 02:06:59 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:59.971 02:06:59 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:59.971 02:06:59 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:59.971 02:06:59 -- setup/hugepages.sh@73 -- # return 0
00:04:59.971 02:06:59 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:59.971 02:06:59 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:59.971 02:06:59 -- setup/hugepages.sh@146 -- # setup output
00:04:59.971 02:06:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.971 02:06:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:00.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:00.231 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.231 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.231 02:06:59 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:00.231 02:06:59 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:00.231 02:06:59 -- setup/hugepages.sh@89 -- # local node
00:05:00.231 02:06:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:00.231 02:06:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:00.231 02:06:59 -- setup/hugepages.sh@92 -- # local surp
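On the arithmetic in the prologue above: get_test_nr_hugepages 1048576 0 requests 1 GiB (1048576 kB) pinned to node 0, and nr_hugepages lands on 512 because the kernel's hugepage size is 2048 kB ('Hugepagesize: 2048 kB' in the meminfo snapshots). The division itself is not visible in the xtrace, so the following is a reconstruction of the computation rather than the script's literal code:

    # 1 GiB expressed in 2 MiB hugepages, as in get_test_nr_hugepages 1048576 0.
    default_hugepages=2048   # kB; 'Hugepagesize: 2048 kB' in this run
    size=1048576             # kB requested (1 GiB), the first argument
    (( size >= default_hugepages )) || exit 1     # guard traced at @55
    (( nr_hugepages = size / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"             # 512, matching @57
    # @58 then spreads the pages over the requested nodes; with the single
    # node id '0', all 512 pages are assigned to node 0 (nodes_test[0]=512).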
00:05:00.231 02:06:59 -- setup/hugepages.sh@93 -- # local resv
00:05:00.231 02:06:59 -- setup/hugepages.sh@94 -- # local anon
00:05:00.231 02:06:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:00.231 02:06:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:00.231 02:06:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:00.231 02:06:59 -- setup/common.sh@18 -- # local node=
00:05:00.231 02:06:59 -- setup/common.sh@19 -- # local var val
00:05:00.231 02:06:59 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.231 02:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.231 02:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.231 02:06:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.231 02:06:59 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.231 02:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.232 02:06:59 -- setup/common.sh@31 -- # IFS=': '
00:05:00.232 02:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910204 kB' 'MemAvailable: 10521268 kB' 'Buffers: 2436 kB' 'Cached: 2813672 kB' 'SwapCached: 0 kB' 'Active: 491940 kB' 'Inactive: 2443392 kB' 'Active(anon): 129700 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120900 kB' 'Mapped: 48796 kB' 'Shmem: 10476 kB' 'KReclaimable: 84740 kB' 'Slab: 164444 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79704 kB' 'KernelStack: 6536 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
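The @96 test above, printed by xtrace as [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], is a glob match against the transparent-hugepage mode string; the brackets are escaped so they match literally instead of opening a character class. The 'always [madvise] never' format is how /sys/kernel/mm/transparent_hugepage/enabled reports the active mode (the file path is an inference here, the trace only shows the already-expanded string). In plain form, assuming that source:

    # Proceed only when THP is not pinned to "never"; an active THP mode can
    # inflate AnonHugePages behind the test's back, so it must be sampled.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # path inferred
    if [[ $thp != *\[never\]* ]]; then
        anon=$(get_meminfo AnonHugePages)   # helper sketched earlier
    fi

Here the active mode is [madvise], so the test succeeds and the AnonHugePages lookup below runs, returning 0 kB.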
00:05:00.232 [... xtrace trimmed: common.sh@31-@32 read the /proc/meminfo snapshot above key by key, from MemTotal through HardwareCorrupted, skipping each with 'continue' until AnonHugePages matches ...]
00:05:00.233 02:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:00.233 02:06:59 -- setup/common.sh@33 -- # echo 0
00:05:00.233 02:06:59 -- setup/common.sh@33 -- # return 0
00:05:00.233 02:06:59 -- setup/hugepages.sh@97 -- # anon=0
00:05:00.233 02:06:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:00.233 02:06:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.233 02:06:59 -- setup/common.sh@18 -- # local node=
00:05:00.233 02:06:59 -- setup/common.sh@19 -- # local var val
00:05:00.233 02:06:59 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.233 02:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
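One more common.sh idiom worth unpacking, since it recurs at every @29 line: mem=("${mem[@]#Node +([0-9]) }") strips a leading 'Node <n> ' from every array element in a single expansion, using the extglob pattern +([0-9]) for the node number. For the /proc/meminfo reads in this stretch there is no such prefix and the expansion is a no-op; for the node0 file earlier it is what turns 'Node 0 HugePages_Total: 1024' into a line the read loop can parse. A standalone demonstration (sample lines invented, format copied from the node0 case):

    #!/usr/bin/env bash
    shopt -s extglob    # +([0-9]) is an extended glob: one or more digits
    mem=('Node 0 MemTotal: 12241980 kB' 'Node 0 HugePages_Total: 1024')
    mem=("${mem[@]#Node +([0-9]) }")   # strip the prefix from every element
    printf '%s\n' "${mem[@]}"
    # MemTotal: 12241980 kB
    # HugePages_Total: 1024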
00:05:00.233 02:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.233 02:06:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.233 02:06:59 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.233 02:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.233 02:06:59 -- setup/common.sh@31 -- # IFS=': '
00:05:00.233 02:06:59 -- setup/common.sh@31 -- # read -r var val _
00:05:00.233 02:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910204 kB' 'MemAvailable: 10521268 kB' 'Buffers: 2436 kB' 'Cached: 2813672 kB' 'SwapCached: 0 kB' 'Active: 491616 kB' 'Inactive: 2443392 kB' 'Active(anon): 129376 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120508 kB' 'Mapped: 48672 kB' 'Shmem: 10476 kB' 'KReclaimable: 84740 kB' 'Slab: 164448 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79708 kB' 'KernelStack: 6576 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:00.233 [... xtrace trimmed: the @31-@32 scan skips every key from MemTotal through HugePages_Rsvd with 'continue' ...]
00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.235 02:06:59 -- setup/common.sh@33 -- # echo 0
00:05:00.235 02:06:59 -- setup/common.sh@33 -- # return 0
00:05:00.235 02:06:59 -- setup/hugepages.sh@99 -- # surp=0
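At this point verify_nr_hugepages holds anon=0 and surp=0 and is about to fetch HugePages_Rsvd the same way. Pulling the lookups together, the identity the function is building toward (traced at @107 below) is that the kernel's configured total equals the requested page count plus surplus and reserved pages. In outline, with this run's values (get_meminfo as sketched earlier; nr_hugepages=512 was set at @147):

    # Shape of the verification this stretch of the trace performs.
    anon=$(get_meminfo AnonHugePages)      # 0 kB, THP created none
    surp=$(get_meminfo HugePages_Surp)     # 0, no surplus pages
    resv=$(get_meminfo HugePages_Rsvd)     # 0, fetched next in the trace
    total=$(get_meminfo HugePages_Total)   # 512
    (( total == nr_hugepages + surp + resv ))   # 512 == 512 + 0 + 0, the @107 check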
164448 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79708 kB' 'KernelStack: 6560 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.235 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.235 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 
02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.236 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.236 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 
-- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 
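What the wall of continue/IFS/read records above and below actually is: the get_meminfo helper traced here (setup/common.sh) snapshots a meminfo file into an array with mapfile -t mem, strips any leading "Node N " prefix, re-emits the lines with printf '%s\n', and walks them with IFS=': ' read -r var val _, hitting continue on every key until the requested one (HugePages_Rsvd in this pass) matches; the \H\u\g\e\P\a\g\e\s\_\R\s\v\d strings are just bash xtrace escaping each character of the expanded pattern on the right of == so it stays literal. A minimal sketch of the same lookup, assuming a simplified helper rather than the traced code verbatim:

    # Sketch only: return the value of one meminfo key, system-wide or per node.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }  # e.g. "0" for HugePages_Rsvd
        done < "$mem_f"
        return 1
    }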
00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.497 02:06:59 -- setup/common.sh@33 -- # echo 0 00:05:00.497 02:06:59 -- setup/common.sh@33 -- # return 0 00:05:00.497 02:06:59 -- setup/hugepages.sh@100 -- # resv=0 00:05:00.497 nr_hugepages=512 00:05:00.497 02:06:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:00.497 resv_hugepages=0 00:05:00.497 02:06:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.497 surplus_hugepages=0 00:05:00.497 02:06:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.497 anon_hugepages=0 00:05:00.497 02:06:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.497 02:06:59 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:00.497 02:06:59 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:00.497 02:06:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.497 02:06:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.497 02:06:59 -- setup/common.sh@18 -- # local node= 00:05:00.497 02:06:59 -- setup/common.sh@19 -- # local var val 00:05:00.497 02:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.497 02:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.497 02:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.497 02:06:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.497 02:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.497 02:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910336 kB' 'MemAvailable: 10521400 kB' 'Buffers: 2436 kB' 'Cached: 2813672 kB' 'SwapCached: 0 kB' 'Active: 491784 kB' 'Inactive: 2443392 kB' 'Active(anon): 129544 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120708 kB' 'Mapped: 48672 kB' 'Shmem: 10476 kB' 'KReclaimable: 84740 kB' 'Slab: 164448 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79708 kB' 'KernelStack: 6560 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 
'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.497 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.497 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
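The HugePages_Rsvd lookup returned 0 a few records back, so hugepages.sh recorded resv=0 next to the values it already echoed (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), asserted (( 512 == nr_hugepages + surp + resv )), and launched the HugePages_Total scan now running through the same keys. The meminfo snapshot it printed is self-consistent: with Hugepagesize at 2048 kB, a 512-page pool pins exactly the Hugetlb figure shown in the dump:

    pages=512 page_kb=2048
    echo "Hugetlb: $(( pages * page_kb )) kB"   # -> Hugetlb: 1048576 kB, matching the snapshot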
00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 
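When this second scan reaches HugePages_Total it echoes 512, the (( 512 == nr_hugepages + surp + resv )) check passes, and hugepages.sh switches to per-node verification (get_nodes, a few records below): it globs /sys/devices/system/node/node+([0-9]) with extglob, notes the expected page count for each node, and re-reads each node's own meminfo (node0 here) to fold in any per-node surplus. Roughly, reusing the simplified get_meminfo sketched earlier:

    shopt -s extglob nullglob
    declare -a nodes_sys nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512            # expected pages; one node on this VM
    done
    for n in "${!nodes_sys[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$n")  # -> 0 in this run
        (( nodes_test[n] += surp ))
    done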
00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 
-- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.498 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.498 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.498 02:06:59 -- setup/common.sh@33 -- # echo 512 00:05:00.498 02:06:59 -- setup/common.sh@33 -- # return 0 00:05:00.498 02:06:59 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:00.498 02:06:59 -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.498 02:06:59 -- setup/hugepages.sh@27 -- # local node 00:05:00.498 02:06:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.498 02:06:59 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.498 02:06:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.498 02:06:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.498 02:06:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.498 02:06:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.498 02:06:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.498 02:06:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.498 02:06:59 -- setup/common.sh@18 -- # local node=0 00:05:00.498 02:06:59 -- setup/common.sh@19 -- # local var val 00:05:00.498 02:06:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.498 02:06:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.498 02:06:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.499 02:06:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.499 02:06:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.499 02:06:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7910336 kB' 'MemUsed: 4331644 kB' 'SwapCached: 0 kB' 'Active: 491732 kB' 'Inactive: 2443392 kB' 'Active(anon): 129492 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2816108 kB' 'Mapped: 48672 kB' 'AnonPages: 120668 kB' 'Shmem: 10476 kB' 'KernelStack: 6544 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84740 kB' 'Slab: 164448 kB' 'SReclaimable: 84740 kB' 'SUnreclaim: 79708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- 
setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # continue 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.499 02:06:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.499 02:06:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.499 02:06:59 -- setup/common.sh@33 -- # echo 0 00:05:00.499 02:06:59 -- setup/common.sh@33 -- # return 0 00:05:00.499 02:06:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.499 02:06:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.499 02:06:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.499 02:06:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.499 node0=512 expecting 512 00:05:00.499 02:06:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:00.500 02:06:59 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:00.500 00:05:00.500 real 0m0.556s 00:05:00.500 user 0m0.284s 00:05:00.500 sys 0m0.308s 00:05:00.500 02:06:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.500 02:06:59 -- common/autotest_common.sh@10 -- # set +x 00:05:00.500 ************************************ 00:05:00.500 END TEST per_node_1G_alloc 00:05:00.500 ************************************ 00:05:00.500 02:06:59 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:00.500 02:06:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.500 02:06:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.500 02:06:59 -- common/autotest_common.sh@10 -- # set +x 00:05:00.500 ************************************ 00:05:00.500 START TEST even_2G_alloc 00:05:00.500 ************************************ 00:05:00.500 02:06:59 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:00.500 02:06:59 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:00.500 02:06:59 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.500 02:06:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:00.500 02:06:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.500 02:06:59 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.500 02:06:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:00.500 02:06:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:00.500 02:06:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.500 02:06:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.500 02:06:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.500 02:06:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.500 02:06:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.500 02:06:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:00.500 02:06:59 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:00.500 02:06:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.500 02:06:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:00.500 02:06:59 -- setup/hugepages.sh@83 -- # : 0 00:05:00.500 02:06:59 -- setup/hugepages.sh@84 -- # : 0 00:05:00.500 02:06:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:00.500 02:06:59 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:00.500 02:06:59 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:00.500 02:06:59 -- setup/hugepages.sh@153 -- # setup output 00:05:00.500 02:06:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.500 02:06:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.760 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:00.760 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:00.760 02:07:00 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:00.760 02:07:00 -- setup/hugepages.sh@89 -- # local node 00:05:00.760 02:07:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.760 02:07:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.760 02:07:00 -- setup/hugepages.sh@92 -- # local surp 00:05:00.760 02:07:00 -- setup/hugepages.sh@93 -- # local resv 00:05:00.760 02:07:00 -- setup/hugepages.sh@94 -- # local anon 00:05:00.760 02:07:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.760 02:07:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.760 02:07:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.760 02:07:00 -- setup/common.sh@18 -- # local node= 00:05:00.760 02:07:00 -- setup/common.sh@19 -- # local var val 00:05:00.760 02:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.760 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.760 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.760 02:07:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.760 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.760 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6854828 kB' 'MemAvailable: 9465896 kB' 'Buffers: 2436 kB' 'Cached: 2813660 kB' 'SwapCached: 0 kB' 'Active: 491704 kB' 'Inactive: 2443388 kB' 'Active(anon): 129464 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 
kB' 'AnonPages: 120848 kB' 'Mapped: 48808 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164420 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79664 kB' 'KernelStack: 6552 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 
02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.760 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.760 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 
02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # continue 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.761 02:07:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.761 02:07:00 -- setup/common.sh@33 -- # echo 0 00:05:00.761 02:07:00 -- setup/common.sh@33 -- # return 0 00:05:00.761 02:07:00 -- setup/hugepages.sh@97 -- # anon=0 00:05:00.761 02:07:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.761 02:07:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.761 02:07:00 -- setup/common.sh@18 -- # local node= 00:05:00.761 02:07:00 -- setup/common.sh@19 -- # local var val 00:05:00.761 02:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.761 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.761 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.761 02:07:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.761 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.761 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.761 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.761 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6854828 kB' 'MemAvailable: 9465896 kB' 'Buffers: 2436 kB' 'Cached: 2813660 kB' 'SwapCached: 0 kB' 'Active: 491668 kB' 'Inactive: 2443388 kB' 'Active(anon): 129428 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120840 kB' 'Mapped: 48808 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164420 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79664 kB' 'KernelStack: 6584 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:00.761-00:05:01.024 02:07:00 -- setup/common.sh@31-32 -- # [MemTotal through HugePages_Free each read with IFS=': ' and tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; none match, so every iteration ends in continue -- identical read/test/continue lines omitted]
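The omitted iterations are all instances of one parsing idiom in setup/common.sh: split each meminfo line on ': ', compare the field name against the bash-escaped pattern, and continue until it matches. A minimal standalone sketch of that idiom, assuming plain /proc/meminfo input (get_field is an illustrative name, not the repo's function):

  get_field() {
    local get=$1 var val rest
    while IFS=': ' read -r var val rest; do
      # Non-matching fields fall through to continue, which is why the
      # trace shows one [[ field == pattern ]] / continue pair per line.
      [[ $var == "$get" ]] || continue
      echo "$val"   # caller captures stdout, e.g. surp=$(get_field HugePages_Surp)
      return 0
    done < /proc/meminfo
  }
  get_field HugePages_Surp   # prints 0 on this runner

The trace resumes below at the last two fields of the scan, where HugePages_Surp finally matches and the function echoes 0.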
00:05:01.024 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.024 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.024 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.024 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.024 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.024 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.024 02:07:00 -- setup/common.sh@33 -- # echo 0 00:05:01.024 02:07:00 -- setup/common.sh@33 -- # return 0 00:05:01.024 02:07:00 -- setup/hugepages.sh@99 -- # surp=0 00:05:01.024 02:07:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.024 02:07:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.024 02:07:00 -- setup/common.sh@18 -- # local node= 00:05:01.024 02:07:00 -- setup/common.sh@19 -- # local var val 00:05:01.024 02:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.024 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.024 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.024 02:07:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.024 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.024 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.024 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.024 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.024 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6854904 kB' 'MemAvailable: 9465972 kB' 'Buffers: 2436 kB' 'Cached: 2813660 kB' 'SwapCached: 0 kB' 'Active: 491112 kB' 'Inactive: 2443388 kB' 'Active(anon): 128872 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120288 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164416 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79660 kB' 'KernelStack: 6544 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:01.024 02:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.024 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.024 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.024 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.024 02:07:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.024 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.024 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.024 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.024 02:07:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.024 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.024 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 
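Each lookup re-snapshots the whole table before scanning, which is why every get_meminfo call dumps the full field list first. The mapfile / prefix-strip / printf trio in the trace corresponds roughly to this sketch (extglob must be enabled explicitly in a standalone script; it is xtrace that renders the printf arguments as the quoted dump seen above):

  shopt -s extglob
  mem_f=/proc/meminfo                 # a per-node lookup swaps in
                                      # /sys/devices/system/node/node<N>/meminfo
  mapfile -t mem < "$mem_f"           # one array element per meminfo line
  mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node <n> " prefix carried by
                                      # per-node meminfo lines; no-op for /proc
  printf '%s\n' "${mem[@]}"           # feeds the scan loop

The dump is also self-consistent: Hugetlb: 2097152 kB is exactly HugePages_Total: 1024 times Hugepagesize: 2048 kB.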
00:05:01.024 02:07:00 -- setup/common.sh@31 -- # read -r var val _
00:05:01.024-00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [Buffers through AnonHugePages each tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with continue -- identical iterations omitted]
00:05:01.025 02:07:00 -- setup/common.sh@31 -- #
IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.025 02:07:00 -- setup/common.sh@33 -- # echo 0 00:05:01.025 02:07:00 -- setup/common.sh@33 -- # return 0 00:05:01.025 02:07:00 -- setup/hugepages.sh@100 -- # resv=0 00:05:01.025 nr_hugepages=1024 00:05:01.025 02:07:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.025 resv_hugepages=0 00:05:01.025 02:07:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.025 surplus_hugepages=0 00:05:01.025 02:07:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.025 anon_hugepages=0 00:05:01.025 02:07:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.025 02:07:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.025 02:07:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.025 02:07:00 -- setup/hugepages.sh@110 -- # 
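The bookkeeping hugepages.sh@99-110 just traced reduces to three meminfo reads and one equality, and the @110 recheck that follows re-fetches HugePages_Total the same way. A standalone sketch with this run's values, using awk in place of the scripted scan:

  nr_hugepages=1024   # what the test requested
  surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)    # 0
  resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)    # 0
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1024
  # The verifier passes only when the kernel's pool equals the requested
  # count plus surplus and reserved pages: here 1024 == 1024 + 0 + 0.
  (( total == nr_hugepages + surp + resv )) && echo OK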
get_meminfo HugePages_Total 00:05:01.025 02:07:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.025 02:07:00 -- setup/common.sh@18 -- # local node= 00:05:01.025 02:07:00 -- setup/common.sh@19 -- # local var val 00:05:01.025 02:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.025 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.025 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.025 02:07:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.025 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.025 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6855584 kB' 'MemAvailable: 9466652 kB' 'Buffers: 2436 kB' 'Cached: 2813660 kB' 'SwapCached: 0 kB' 'Active: 491260 kB' 'Inactive: 2443388 kB' 'Active(anon): 129020 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120244 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164412 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 6560 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.025 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.025 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 02:07:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.026 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 02:07:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.026 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 02:07:00 -- setup/common.sh@32 
-- # [SwapCached through ShmemPmdMapped each tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped with continue -- identical iterations omitted]
00:05:01.026 02:07:00 -- setup/common.sh@31 --
# IFS=': ' 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 02:07:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.026 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 02:07:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.026 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.026 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.026 02:07:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.027 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.027 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 02:07:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.027 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.027 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 02:07:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.027 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.027 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.027 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.027 02:07:00 -- setup/common.sh@33 -- # echo 1024 00:05:01.027 02:07:00 -- setup/common.sh@33 -- # return 0 00:05:01.027 02:07:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.027 02:07:00 -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.027 02:07:00 -- setup/hugepages.sh@27 -- # local node 00:05:01.027 02:07:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.027 02:07:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.027 02:07:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.027 02:07:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.027 02:07:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.027 02:07:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.027 02:07:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.027 02:07:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.027 02:07:00 -- setup/common.sh@18 -- # local node=0 00:05:01.027 02:07:00 -- setup/common.sh@19 -- # local var val 00:05:01.027 02:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.027 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.027 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.027 02:07:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.027 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.027 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.027 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.027 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6855584 kB' 'MemUsed: 5386396 kB' 'SwapCached: 0 kB' 'Active: 491212 kB' 'Inactive: 2443388 kB' 'Active(anon): 128972 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2816096 kB' 'Mapped: 48672 kB' 'AnonPages: 120456 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84756 kB' 'Slab: 164408 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:01.027 02:07:00 -- setup/common.sh@31 -- # read -r var val _
00:05:01.027-00:05:01.028 02:07:00 -- setup/common.sh@32 -- # [MemTotal through HugePages_Free in the node0 snapshot each tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with continue -- identical iterations omitted]
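For the per-node pass the same scan runs against the node's own meminfo file rather than /proc/meminfo, and the result feeds the node0=1024 expecting 1024 check traced just below. A sketch of that enumeration and comparison, assuming the standard sysfs layout (awk again stands in for the scripted scan; per-node lines carry a "Node <n>" prefix, so the value is field 4):

  shopt -s extglob   # for the +([0-9]) glob used by get_nodes
  expected=1024
  for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    echo "node${id}=${total} expecting ${expected}"
    [[ $total == "$expected" ]] || exit 1   # mirrors the [[ 1024 == \1\0\2\4 ]] check
  done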
setup/common.sh@31 -- # IFS=': ' 00:05:01.028 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.028 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.028 02:07:00 -- setup/common.sh@33 -- # echo 0 00:05:01.028 02:07:00 -- setup/common.sh@33 -- # return 0 00:05:01.028 02:07:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.028 02:07:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.028 02:07:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.028 02:07:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.028 node0=1024 expecting 1024 00:05:01.028 02:07:00 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:01.028 02:07:00 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:01.028 00:05:01.028 real 0m0.512s 00:05:01.028 user 0m0.264s 00:05:01.028 sys 0m0.284s 00:05:01.028 02:07:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.028 02:07:00 -- common/autotest_common.sh@10 -- # set +x 00:05:01.028 ************************************ 00:05:01.028 END TEST even_2G_alloc 00:05:01.028 ************************************ 00:05:01.028 02:07:00 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:01.028 02:07:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.028 02:07:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.028 02:07:00 -- common/autotest_common.sh@10 -- # set +x 00:05:01.028 ************************************ 00:05:01.028 START TEST odd_alloc 00:05:01.028 ************************************ 00:05:01.028 02:07:00 -- common/autotest_common.sh@1104 -- # odd_alloc 00:05:01.028 02:07:00 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:01.028 02:07:00 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:01.028 02:07:00 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:01.028 02:07:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.028 02:07:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:01.028 02:07:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:01.028 02:07:00 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:01.028 02:07:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.028 02:07:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:01.028 02:07:00 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:01.028 02:07:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.028 02:07:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.028 02:07:00 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:01.028 02:07:00 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:01.028 02:07:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.028 02:07:00 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:01.028 02:07:00 -- setup/hugepages.sh@83 -- # : 0 00:05:01.028 02:07:00 -- setup/hugepages.sh@84 -- # : 0 00:05:01.028 02:07:00 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.028 02:07:00 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:01.028 02:07:00 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:01.028 02:07:00 -- setup/hugepages.sh@160 -- # setup output 00:05:01.028 02:07:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.028 02:07:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.287 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding 
00:05:01.549 02:07:00 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:01.549 02:07:00 -- setup/hugepages.sh@89 -- # local node
00:05:01.549 02:07:00 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:01.549 02:07:00 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:01.549 02:07:00 -- setup/hugepages.sh@92 -- # local surp
00:05:01.549 02:07:00 -- setup/hugepages.sh@93 -- # local resv
00:05:01.549 02:07:00 -- setup/hugepages.sh@94 -- # local anon
00:05:01.549 02:07:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:01.549 02:07:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:01.549 02:07:00 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:01.549 02:07:00 -- setup/common.sh@18 -- # local node=
00:05:01.549 02:07:00 -- setup/common.sh@19 -- # local var val
00:05:01.549 02:07:00 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.549 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.549 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.549 02:07:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.549 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.549 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.549 02:07:00 -- setup/common.sh@31 -- # IFS=': '
00:05:01.549 02:07:00 -- setup/common.sh@31 -- # read -r var val _
00:05:01.549 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6852716 kB' 'MemAvailable: 9463784 kB' 'Buffers: 2436 kB' 'Cached: 2813660 kB' 'SwapCached: 0 kB' 'Active: 491876 kB' 'Inactive: 2443388 kB' 'Active(anon): 129636 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120772 kB' 'Mapped: 48800 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164396 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79640 kB' 'KernelStack: 6552 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:01.549 02:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:01.549 02:07:00 -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31 read / @32 compare / @32 continue xtrace repeated for each remaining meminfo field through HardwareCorrupted ...]
00:05:01.550 02:07:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:01.550 02:07:00 -- setup/common.sh@33 -- # echo 0
00:05:01.550 02:07:00 -- setup/common.sh@33 -- # return 0
00:05:01.550 02:07:00 -- setup/hugepages.sh@97 -- # anon=0
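The long field-by-field trace above is one loop in setup/common.sh: split each meminfo line with IFS=': ', compare the field name against the requested key, and echo the value on a match. A minimal standalone approximation of that pattern (not the SPDK helper itself, which also handles the per-node files):

  get_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # 'MemTotal: 12241980 kB' splits into var=MemTotal, val=12241980
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  get_field AnonHugePages    # printed 0 in the run above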
00:05:01.550 02:07:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:01.550 02:07:00 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.550 02:07:00 -- setup/common.sh@18 -- # local node=
00:05:01.550 02:07:00 -- setup/common.sh@19 -- # local var val
00:05:01.550 02:07:00 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.550 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.550 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.550 02:07:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.550 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.550 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.550 02:07:00 -- setup/common.sh@31 -- # IFS=': '
00:05:01.550 02:07:00 -- setup/common.sh@31 -- # read -r var val _
00:05:01.550 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6852716 kB' 'MemAvailable: 9463784 kB' 'Buffers: 2436 kB' 'Cached: 2813660 kB' 'SwapCached: 0 kB' 'Active: 491548 kB' 'Inactive: 2443388 kB' 'Active(anon): 129308 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120424 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164400 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79644 kB' 'KernelStack: 6576 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:01.550 02:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.550 02:07:00 -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31 read / @32 compare / @32 continue xtrace repeated for each meminfo field through HugePages_Rsvd ...]
00:05:01.551 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.551 02:07:00 -- setup/common.sh@33 -- # echo 0
00:05:01.551 02:07:00 -- setup/common.sh@33 -- # return 0
00:05:01.551 02:07:00 -- setup/hugepages.sh@99 -- # surp=0
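One detail worth noting in these traces: mem=("${mem[@]#Node +([0-9]) }") strips a leading 'Node <n> ' from every captured line, so the same parser works both on /proc/meminfo and on the per-node sysfs files, whose lines carry that prefix. A small demonstration; extglob is required for the +([0-9]) pattern in a parameter expansion:

  shopt -s extglob
  line='Node 0 HugePages_Surp: 0'
  echo "${line#Node +([0-9]) }"    # -> HugePages_Surp: 0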
00:05:01.551 02:07:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:01.551 02:07:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:01.551 02:07:00 -- setup/common.sh@18 -- # local node=
00:05:01.551 02:07:00 -- setup/common.sh@19 -- # local var val
00:05:01.551 02:07:00 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.551 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.551 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.551 02:07:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.551 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.551 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.551 02:07:00 -- setup/common.sh@31 -- # IFS=': '
00:05:01.551 02:07:00 -- setup/common.sh@31 -- # read -r var val _
00:05:01.551 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6852968 kB' 'MemAvailable: 9464036 kB' 'Buffers: 2436 kB' 'Cached: 2813660 kB' 'SwapCached: 0 kB' 'Active: 491720 kB' 'Inactive: 2443388 kB' 'Active(anon): 129480 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120416 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164384 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79628 kB' 'KernelStack: 6560 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:01.551 02:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.551 02:07:00 -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31 read / @32 compare / @32 continue xtrace repeated for each meminfo field through HugePages_Free ...]
00:05:01.552 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.552 02:07:00 -- setup/common.sh@33 -- # echo 0
00:05:01.552 02:07:00 -- setup/common.sh@33 -- # return 0
00:05:01.552 02:07:00 -- setup/hugepages.sh@100 -- # resv=0
00:05:01.552 nr_hugepages=1025
00:05:01.552 02:07:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:01.552 resv_hugepages=0
00:05:01.552 02:07:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:01.552 surplus_hugepages=0
00:05:01.552 02:07:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:01.552 anon_hugepages=0
00:05:01.552 02:07:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:01.552 02:07:00 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:01.552 02:07:00 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:01.552 02:07:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:01.552 02:07:00 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:01.552 02:07:00 -- setup/common.sh@18 -- # local node=
00:05:01.552 02:07:00 -- setup/common.sh@19 -- # local var val
00:05:01.552 02:07:00 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.552 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.552 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.552 02:07:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.552 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.552 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.552 02:07:00 -- setup/common.sh@31 -- # IFS=': '
00:05:01.552 02:07:00 -- setup/common.sh@31 -- # read -r var val _
00:05:01.552 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6852968 kB' 'MemAvailable: 9464036 kB' 'Buffers: 2436 kB' 'Cached: 2813660 kB' 'SwapCached: 0 kB' 'Active: 491504 kB' 'Inactive: 2443388 kB' 'Active(anon): 129264 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120460 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164384 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79628 kB' 'KernelStack: 6528 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:01.552 02:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:01.552 02:07:00 -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31 read / @32 compare / @32 continue xtrace repeated for each meminfo field through Unaccepted ...]
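The consistency condition the test is assembling, restated with the values the traces returned (surp and resv from the probes above, the total from the HugePages_Total lookup that completes just below): the kernel's reported total must equal the requested count plus surplus plus reserved pages.

  nr_hugepages=1025; surp=0; resv=0    # values echoed by the traces above
  total=1025                           # HugePages_Total, resolved just below
  (( total == nr_hugepages + surp + resv )) && echo OK    # the @107/@110 check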
00:05:01.552 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.552 02:07:00 -- setup/common.sh@33 -- # echo 1025 00:05:01.552 02:07:00 -- setup/common.sh@33 -- # return 0 00:05:01.552 02:07:00 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:01.552 02:07:00 -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.552 02:07:00 -- setup/hugepages.sh@27 -- # local node 00:05:01.552 02:07:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.552 02:07:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:01.552 02:07:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.552 02:07:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.552 02:07:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.553 02:07:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.553 02:07:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.553 02:07:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.553 02:07:00 -- setup/common.sh@18 -- # local node=0 00:05:01.553 02:07:00 -- setup/common.sh@19 -- # local var val 00:05:01.553 02:07:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.553 02:07:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.553 02:07:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.553 02:07:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.553 02:07:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.553 02:07:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6852968 kB' 'MemUsed: 5389012 kB' 'SwapCached: 0 kB' 'Active: 491568 kB' 'Inactive: 2443388 kB' 'Active(anon): 129328 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443388 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2816096 kB' 'Mapped: 48672 kB' 'AnonPages: 120544 kB' 'Shmem: 10468 kB' 'KernelStack: 6560 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84756 kB' 'Slab: 164384 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 
02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- 
setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # continue 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.553 02:07:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.553 02:07:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.553 02:07:00 -- setup/common.sh@33 -- # echo 0 00:05:01.553 02:07:00 -- setup/common.sh@33 -- # return 0 00:05:01.553 02:07:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.553 02:07:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.553 02:07:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.553 02:07:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.553 node0=1025 expecting 1025 00:05:01.553 02:07:00 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:01.553 02:07:00 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:01.553 00:05:01.553 real 0m0.536s 00:05:01.553 user 0m0.274s 00:05:01.553 sys 0m0.295s 00:05:01.553 02:07:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.553 02:07:00 -- common/autotest_common.sh@10 -- # set +x 00:05:01.553 ************************************ 00:05:01.553 END TEST odd_alloc 00:05:01.553 ************************************ 00:05:01.553 02:07:01 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:01.553 02:07:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.553 02:07:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.553 02:07:01 -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.553 ************************************ 00:05:01.553 START TEST custom_alloc 00:05:01.553 ************************************ 00:05:01.553 02:07:01 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:01.553 02:07:01 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:01.553 02:07:01 -- setup/hugepages.sh@169 -- # local node 00:05:01.553 02:07:01 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:01.553 02:07:01 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:01.553 02:07:01 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:01.553 02:07:01 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:01.553 02:07:01 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:01.553 02:07:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:01.553 02:07:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:01.553 02:07:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:01.553 02:07:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.553 02:07:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:01.553 02:07:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:01.553 02:07:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.553 02:07:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.553 02:07:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:01.553 02:07:01 -- setup/hugepages.sh@83 -- # : 0 00:05:01.553 02:07:01 -- setup/hugepages.sh@84 -- # : 0 00:05:01.553 02:07:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:01.553 02:07:01 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:01.553 02:07:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:01.553 02:07:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:01.553 02:07:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:01.553 02:07:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.553 02:07:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:01.553 02:07:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:01.553 02:07:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.553 02:07:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.553 02:07:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:01.553 02:07:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:01.553 02:07:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:01.553 02:07:01 -- setup/hugepages.sh@78 -- # return 0 00:05:01.553 02:07:01 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:01.553 02:07:01 -- setup/hugepages.sh@187 -- # setup output 00:05:01.553 02:07:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.553 02:07:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.123 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.123 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.123 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.123 02:07:01 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:02.123 02:07:01 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:02.123 02:07:01 -- setup/hugepages.sh@89 -- # local node 00:05:02.123 02:07:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.123 02:07:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.123 02:07:01 -- setup/hugepages.sh@92 -- # local surp 00:05:02.123 02:07:01 -- setup/hugepages.sh@93 -- # local resv 00:05:02.123 02:07:01 -- setup/hugepages.sh@94 -- # local anon 00:05:02.123 02:07:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.123 02:07:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.123 02:07:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.123 02:07:01 -- setup/common.sh@18 -- # local node= 00:05:02.123 02:07:01 -- setup/common.sh@19 -- # local var val 00:05:02.123 02:07:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.123 02:07:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.123 02:07:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.123 02:07:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.123 02:07:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.123 02:07:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7913436 kB' 'MemAvailable: 10524508 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491940 kB' 'Inactive: 2443392 kB' 'Active(anon): 129700 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120792 kB' 'Mapped: 48792 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164392 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79636 kB' 'KernelStack: 6548 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.123 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.123 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 
02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- 
setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.124 02:07:01 -- setup/common.sh@33 -- # echo 0 00:05:02.124 02:07:01 -- setup/common.sh@33 -- # return 0 00:05:02.124 02:07:01 -- setup/hugepages.sh@97 -- # anon=0 00:05:02.124 02:07:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.124 02:07:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.124 02:07:01 -- setup/common.sh@18 -- # local node= 00:05:02.124 02:07:01 -- setup/common.sh@19 -- # local var val 00:05:02.124 02:07:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.124 02:07:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.124 02:07:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.124 02:07:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.124 02:07:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.124 02:07:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7913796 kB' 'MemAvailable: 10524868 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491688 kB' 'Inactive: 2443392 kB' 'Active(anon): 129448 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120540 kB' 'Mapped: 48736 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164396 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79640 kB' 'KernelStack: 6548 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.124 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.124 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 
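The full snapshot printed a few entries back reports HugePages_Total: 512, Hugepagesize: 2048 kB and Hugetlb: 1048576 kB, which is internally consistent with the 1048576 kB that custom_alloc requested: 512 is exactly the requested size divided by the 2048 kB default hugepage size. As a quick sanity check, with values taken from this run:

# 1 GiB request at the 2 MiB default hugepage size gives 512 pages, and the
# kernel's Hugetlb counter should read pages * page size once allocated.
size_kb=1048576 hugepagesize_kb=2048
echo $(( size_kb / hugepagesize_kb ))   # 512, matching nr_hugepages
echo $(( 512 * hugepagesize_kb ))       # 1048576 kB, matching Hugetlb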
00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- 
setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 
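This second field-by-field pass verifies the layout that custom_alloc requested earlier via HUGENODE='nodes_hp[0]=512'. On a single-node VM like this runner the request is equivalent to a plain nr_hugepages setting, but per-node requests ultimately land on the kernel's per-node sysfs knobs. A sketch of that mapping, assuming 2048 kB pages (illustrative; the actual writes happen inside scripts/setup.sh):

# Request 512 hugepages on NUMA node 0, then read the result back.
node=0 pages=512
echo "$pages" | sudo tee \
    /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages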
00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.125 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.125 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.126 02:07:01 -- setup/common.sh@33 -- # echo 0 00:05:02.126 02:07:01 -- setup/common.sh@33 -- # return 0 00:05:02.126 02:07:01 -- setup/hugepages.sh@99 -- # surp=0 00:05:02.126 02:07:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.126 02:07:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.126 02:07:01 -- setup/common.sh@18 -- # local node= 00:05:02.126 02:07:01 -- setup/common.sh@19 -- # local var val 00:05:02.126 02:07:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.126 02:07:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.126 02:07:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.126 02:07:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.126 02:07:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.126 02:07:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.126 
02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7913544 kB' 'MemAvailable: 10524616 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491592 kB' 'Inactive: 2443392 kB' 'Active(anon): 129352 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164392 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79636 kB' 'KernelStack: 6544 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.126 02:07:01 -- setup/common.sh@32 -- # continue 00:05:02.126 02:07:01 
-- setup/common.sh@31 -- # IFS=': ' 00:05:02.126 02:07:01 -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the IFS=': '/read loop tests each remaining /proc/meminfo field against HugePages_Rsvd and skips every mismatch with continue]
00:05:02.127 02:07:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.127 02:07:01 -- setup/common.sh@33 -- # echo 0 00:05:02.127 02:07:01 -- setup/common.sh@33 -- # return 0 00:05:02.127 02:07:01 -- setup/hugepages.sh@100 -- # resv=0 00:05:02.127 nr_hugepages=512 00:05:02.127 02:07:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:02.127 resv_hugepages=0 00:05:02.127 02:07:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.127 surplus_hugepages=0 00:05:02.127 02:07:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.127 anon_hugepages=0 00:05:02.127 02:07:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.127 02:07:01 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:02.127 02:07:01 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:02.127 02:07:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.127 02:07:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.127 02:07:01 -- setup/common.sh@18 -- # local node= 00:05:02.127 02:07:01 -- setup/common.sh@19 -- # local var val 00:05:02.127 02:07:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.127 02:07:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.127 02:07:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.127 02:07:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.127 02:07:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.127 02:07:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.127 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.127 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.127 02:07:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7913292 kB' 'MemAvailable: 10524364 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491500 kB' 'Inactive: 2443392 kB' 'Active(anon): 129260 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 120388 kB' 'Mapped: 48672 kB'
'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164388 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79632 kB' 'KernelStack: 6560 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
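
The run of field tests condensed above is the heart of setup/common.sh's get_meminfo: it splits each meminfo line on ': ' and skips everything except the requested counter. A minimal standalone sketch of that lookup (illustrative function name, not the repo's exact code):

get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do    # split "Field: value kB" on ':' and ' '
        [[ $var == "$get" ]] || continue    # each mismatch is one 'continue' in the trace
        echo "$val"                         # e.g. 0 for HugePages_Rsvd above
        return 0
    done < /proc/meminfo
    return 1                                # field not present
}
get_meminfo_field HugePages_Rsvd            # prints 0 on this runner
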
[xtrace condensed: the same read loop walks every /proc/meminfo field until HugePages_Total matches] 00:05:02.128 02:07:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.128 02:07:01 -- setup/common.sh@33 -- # echo 512 00:05:02.128 02:07:01 -- setup/common.sh@33 -- # return 0 00:05:02.128 02:07:01 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:02.128 02:07:01 -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.128 02:07:01 -- setup/hugepages.sh@27 -- # local node 00:05:02.128 02:07:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.128 02:07:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:02.128 02:07:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.128 02:07:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.128 02:07:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.129 02:07:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.129 02:07:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.129 02:07:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.129 02:07:01 -- setup/common.sh@18 -- # local node=0 00:05:02.129 02:07:01 -- setup/common.sh@19 -- # local var val 00:05:02.129 02:07:01 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.129 02:07:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.129 02:07:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.129 02:07:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.129 02:07:01 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.129 02:07:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.129 02:07:01 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.129 02:07:01 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.129 02:07:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7913292 kB' 'MemUsed: 4328688 kB' 'SwapCached: 0 kB' 'Active: 491692 kB' 'Inactive: 2443392 kB' 'Active(anon): 129452 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2816100 kB' 'Mapped: 48672 kB' 'AnonPages: 120580 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84756 kB' 'Slab: 164388 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
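
The node0 pass above differs only in its input file: when a node argument is given, common.sh@23-24 switch mem_f to /sys/devices/system/node/node0/meminfo, and common.sh@29 strips the "Node 0 " prefix those sysfs lines carry, using an extglob pattern. A hedged sketch of that selection, assuming the standard sysfs layout (hypothetical helper name):

shopt -s extglob                            # the +([0-9]) pattern below needs extglob
get_node_meminfo_field() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"               # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node N " prefix sysfs adds
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_node_meminfo_field HugePages_Surp 0     # prints 0, as in the node0 pass above
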
[xtrace condensed: the read loop walks node0's meminfo fields until HugePages_Surp matches] 00:05:02.129 02:07:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.129 02:07:01 -- setup/common.sh@33 -- # echo 0 00:05:02.129 02:07:01 -- setup/common.sh@33 -- # return 0 00:05:02.130 02:07:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.130 02:07:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.130 02:07:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.130 02:07:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.130 node0=512 expecting 512 00:05:02.130 02:07:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:02.130 02:07:01 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:02.130 00:05:02.130 real 0m0.538s 00:05:02.130 user 0m0.270s 00:05:02.130 sys 0m0.301s 00:05:02.130 02:07:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.130 02:07:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.130 ************************************ 00:05:02.130 END TEST custom_alloc 00:05:02.130 ************************************ 00:05:02.130 02:07:01 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
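
custom_alloc passes because node0 reports exactly the 512 pages requested and the pool arithmetic balances. Restated as a hedged standalone check mirroring the hugepages.sh@107/@110 comparisons (function name is illustrative):

check_hugepages_accounting() {
    local nr total surp resv
    nr=$(</proc/sys/vm/nr_hugepages)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    # in the run above: 512 == 512 + 0 + 0
    (( total == nr + surp + resv )) ||
        { echo "hugepages accounting mismatch" >&2; return 1; }
}
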
00:05:02.130 02:07:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.130 02:07:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.130 02:07:01 -- common/autotest_common.sh@10 -- # set +x 00:05:02.130 ************************************ 00:05:02.130 START TEST no_shrink_alloc 00:05:02.130 ************************************ 00:05:02.130 02:07:01 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:05:02.130 02:07:01 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:02.130 02:07:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:02.130 02:07:01 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:02.130 02:07:01 -- setup/hugepages.sh@51 -- # shift 00:05:02.130 02:07:01 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:02.130 02:07:01 -- setup/hugepages.sh@52 -- # local node_ids 00:05:02.130 02:07:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.130 02:07:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:02.130 02:07:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:02.130 02:07:01 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:02.130 02:07:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.130 02:07:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:02.130 02:07:01 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.130 02:07:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.130 02:07:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.130 02:07:01 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:02.130 02:07:01 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:02.130 02:07:01 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:02.130 02:07:01 -- setup/hugepages.sh@73 -- # return 0 00:05:02.130 02:07:01 -- setup/hugepages.sh@198 -- # setup output 00:05:02.130 02:07:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.130 02:07:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.699 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.699 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
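
The get_test_nr_hugepages 2097152 0 trace above converts a size into a page count: 2097152 kB at the 2048 kB Hugepagesize yields the nr_hugepages=1024 that the rest of no_shrink_alloc expects. A sketch of that conversion, assuming the size argument is in kB like Hugepagesize:

size_kb=2097152                             # requested pool size in kB (2 GiB)
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
if (( size_kb >= default_hugepages )); then # same guard as hugepages.sh@55
    nr_hugepages=$(( size_kb / default_hugepages ))
fi
echo "nr_hugepages=$nr_hugepages"           # -> nr_hugepages=1024
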
+([0-9]) }") 00:05:02.699 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.700 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6864752 kB' 'MemAvailable: 9475824 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491888 kB' 'Inactive: 2443392 kB' 'Active(anon): 129648 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121160 kB' 'Mapped: 48856 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164412 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 6560 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.700 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.700 02:07:02 -- setup/common.sh@32 -- # continue 
[xtrace condensed: the read loop walks every /proc/meminfo field until AnonHugePages matches] 00:05:02.701 02:07:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.701 02:07:02 -- setup/common.sh@33 -- # echo 0 00:05:02.701 02:07:02 -- setup/common.sh@33 -- # return 0 00:05:02.701 02:07:02 -- setup/hugepages.sh@97 -- # anon=0 00:05:02.701 02:07:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.701 02:07:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.701 02:07:02 -- setup/common.sh@18 -- # local node= 00:05:02.701 02:07:02 -- setup/common.sh@19 -- # local var val 00:05:02.701 02:07:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.701 02:07:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.701 02:07:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.701 02:07:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.701 02:07:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.701 02:07:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.701 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.701 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6864752 kB' 'MemAvailable: 9475824 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491632 kB' 'Inactive: 2443392 kB' 'Active(anon): 129392 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120460 kB' 'Mapped: 48672 kB' 'Shmem:
10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164416 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79660 kB' 'KernelStack: 6528 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:02.701 02:07:02 -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the read loop walks /proc/meminfo toward HugePages_Surp]
val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 
00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.702 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.702 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.703 02:07:02 -- setup/common.sh@33 -- # echo 0 00:05:02.703 02:07:02 -- setup/common.sh@33 -- # return 0 00:05:02.703 02:07:02 -- setup/hugepages.sh@99 -- # surp=0 00:05:02.703 02:07:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.703 02:07:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.703 02:07:02 -- setup/common.sh@18 -- # local node= 00:05:02.703 02:07:02 -- setup/common.sh@19 -- # local var val 00:05:02.703 02:07:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.703 02:07:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.703 02:07:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.703 02:07:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.703 02:07:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.703 02:07:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6864752 kB' 'MemAvailable: 9475824 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491620 kB' 'Inactive: 2443392 kB' 'Active(anon): 129380 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120452 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164408 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79652 kB' 'KernelStack: 6512 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- 
# continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- 
# [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.703 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.703 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # 
continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.704 02:07:02 -- setup/common.sh@33 -- # echo 0 00:05:02.704 02:07:02 -- setup/common.sh@33 -- # return 0 00:05:02.704 02:07:02 -- setup/hugepages.sh@100 -- # resv=0 00:05:02.704 
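The trace above is verify_nr_hugepages resolving one /proc/meminfo field per call: get_meminfo sets IFS=': ', reads key/value pairs, and falls through "continue" on every key until the requested one, so HugePages_Surp and HugePages_Rsvd both resolve to 0 here. A minimal sketch of that scanning pattern, simplified from what the setup/common.sh trace shows (the helper name is illustrative; the real function also handles per-node files and mapfile buffering):

    #!/usr/bin/env bash
    # Scan key/value pairs the way the trace above does: every key before
    # the requested one falls through to "continue"; the first match is
    # echoed and the function returns.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in the run above
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in the run above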
00:05:02.704 nr_hugepages=1024
00:05:02.704 02:07:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 resv_hugepages=0
00:05:02.704 02:07:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 surplus_hugepages=0
00:05:02.704 02:07:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 anon_hugepages=0
00:05:02.704 02:07:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:02.704 02:07:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:02.704 02:07:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:02.704 02:07:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:02.704 02:07:02 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:02.704 02:07:02 -- setup/common.sh@18 -- # local node=
00:05:02.704 02:07:02 -- setup/common.sh@19 -- # local var val
00:05:02.704 02:07:02 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.704 02:07:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.704 02:07:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.704 02:07:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.704 02:07:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.704 02:07:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.704 02:07:02 -- setup/common.sh@31 -- # IFS=': '
00:05:02.704 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6865052 kB' 'MemAvailable: 9476124 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491516 kB' 'Inactive: 2443392 kB' 'Active(anon): 129276 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120344 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164404 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79648 kB' 'KernelStack: 6496 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:02.704 02:07:02 -- setup/common.sh@31 -- # read -r var val _
00:05:02.704 02:07:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:02.704 02:07:02 -- setup/common.sh@32 -- # continue
[... same per-key scan repeated until HugePages_Total ...]
00:05:02.706 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:02.706 02:07:02 -- setup/common.sh@33 -- # echo 1024
00:05:02.706 02:07:02 -- setup/common.sh@33 -- # return 0
00:05:02.706 02:07:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:02.706 02:07:02 -- setup/hugepages.sh@112 -- # get_nodes
00:05:02.706 02:07:02 -- setup/hugepages.sh@27 -- # local node
00:05:02.706 02:07:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:02.706 02:07:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:02.706 02:07:02 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:02.706 02:07:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:02.706 02:07:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:02.706 02:07:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:02.706 02:07:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:02.706 02:07:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:02.706 02:07:02 -- setup/common.sh@18 -- # local node=0
00:05:02.706 02:07:02 -- setup/common.sh@19 -- # local var val
00:05:02.706 02:07:02 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.706 02:07:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.706 02:07:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:02.706 02:07:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:02.706 02:07:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.706 02:07:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.706 02:07:02 -- setup/common.sh@31 -- # IFS=': '
00:05:02.706 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6865052 kB' 'MemUsed: 5376928 kB' 'SwapCached: 0 kB' 'Active: 491656 kB' 'Inactive: 2443392 kB' 'Active(anon): 129416 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2816100 kB' 'Mapped: 48672 kB' 'AnonPages: 120524 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84756 kB' 'Slab: 164404 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:02.706 02:07:02 -- setup/common.sh@31 -- # read -r var val _
00:05:02.706 02:07:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.706 02:07:02 -- setup/common.sh@32 -- # continue
[... same per-key scan over the node0 meminfo keys until HugePages_Surp ...]
00:05:02.707 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:02.707 02:07:02 -- setup/common.sh@33 -- # echo 0
00:05:02.707 02:07:02 -- setup/common.sh@33 -- # return 0
00:05:02.707 02:07:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:02.707 02:07:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:02.707 02:07:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:02.707 02:07:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:02.707 02:07:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:05:02.707 02:07:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:02.707 02:07:02 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:02.707 02:07:02 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:02.707 02:07:02 -- setup/hugepages.sh@202 -- # setup output
00:05:02.707 02:07:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:02.707 02:07:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:02.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:03.228 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.228 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.228 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:03.228 02:07:02 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:03.228 02:07:02 -- setup/hugepages.sh@89 -- # local node
00:05:03.228 02:07:02 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.228 02:07:02 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.228 02:07:02 -- setup/hugepages.sh@92 -- # local surp
00:05:03.228 02:07:02 -- setup/hugepages.sh@93 -- # local resv
00:05:03.228 02:07:02 -- setup/hugepages.sh@94 -- # local anon
00:05:03.228 02:07:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.228 02:07:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.228 02:07:02 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.228 02:07:02 -- setup/common.sh@18 -- # local node=
00:05:03.228 02:07:02 -- setup/common.sh@19 -- # local var val
00:05:03.228 02:07:02 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.228 02:07:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.228 02:07:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.228 02:07:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.228 02:07:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.228 02:07:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.228 02:07:02 -- setup/common.sh@31 -- # IFS=': '
00:05:03.228 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6862396 kB' 'MemAvailable: 9473468 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491972 kB' 'Inactive: 2443392 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120936 kB' 'Mapped: 48856 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164416 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79660 kB' 'KernelStack: 6584 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:03.228 02:07:02 -- setup/common.sh@31 -- # read -r var val _
00:05:03.228 02:07:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.228 02:07:02 -- setup/common.sh@32 -- # continue
[... same per-key scan from MemFree through WritebackTmp ...]
00:05:03.229 02:07:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.229 02:07:02 -- setup/common.sh@32 -- #
continue 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # continue 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # continue 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # continue 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # continue 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # continue 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # continue 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.229 02:07:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.229 02:07:02 -- setup/common.sh@33 -- # echo 0 00:05:03.229 02:07:02 -- setup/common.sh@33 -- # return 0 00:05:03.229 02:07:02 -- setup/hugepages.sh@97 -- # anon=0 00:05:03.229 02:07:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.229 02:07:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.229 02:07:02 -- setup/common.sh@18 -- # local node= 00:05:03.229 02:07:02 -- setup/common.sh@19 -- # local var val 00:05:03.229 02:07:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:03.229 02:07:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.229 02:07:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.229 02:07:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.229 02:07:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.229 02:07:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.229 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.229 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6862656 kB' 'MemAvailable: 9473728 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491680 kB' 'Inactive: 2443392 kB' 'Active(anon): 129440 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120576 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164412 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 6560 kB' 'PageTables: 4384 kB' 
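The get_meminfo call traced above walks a captured copy of /proc/meminfo one "key: value" pair at a time until the requested field name matches, then echoes the value and returns. A minimal sketch of that parsing pattern, under the hypothetical name get_meminfo_value (setup/common.sh itself iterates a pre-captured array rather than reading the file directly):

#!/usr/bin/env bash
# Sketch only, not the repo's implementation: split each line on ': '
# and stop at the wanted key; a trailing unit such as "kB" lands in _.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo_value AnonHugePages   # prints 0 on the VM traced above

The escaped right-hand side in the trace (\A\n\o\n\H\u\g\e\P\a\g\e\s) is just how bash xtrace renders the expanded pattern operand of [[ ... == ... ]]; the comparison is a literal string match, not a glob.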
00:05:03.229 02:07:02 -- setup/common.sh@31 -- # read -r var val _
00:05:03.229 02:07:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.229 02:07:02 -- setup/common.sh@32 -- # continue
[... the same read / compare / continue trace repeats for each /proc/meminfo field until HugePages_Surp matches ...]
00:05:03.230 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.230 02:07:02 -- setup/common.sh@33 -- # echo 0
00:05:03.230 02:07:02 -- setup/common.sh@33 -- # return 0
00:05:03.230 02:07:02 -- setup/hugepages.sh@99 -- # surp=0
00:05:03.230 02:07:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:03.230 02:07:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:03.230 02:07:02 -- setup/common.sh@18 -- # local node=
00:05:03.230 02:07:02 -- setup/common.sh@19 -- # local var val
00:05:03.230 02:07:02 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.230 02:07:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.230 02:07:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.230 02:07:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.230 02:07:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.230 02:07:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.230 02:07:02 -- setup/common.sh@31 -- # IFS=': '
00:05:03.230 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6863064 kB' 'MemAvailable: 9474136 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491612 kB' 'Inactive: 2443392 kB' 'Active(anon): 129372 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120472 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164412 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 6544 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
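Every call re-captures the snapshot with mapfile and strips any leading "Node N " prefix, which is why the trace shows mem=("${mem[@]#Node +([0-9]) }"); with no node argument, the @23 probe fails ($node is empty) and mem_f stays at /proc/meminfo. A sketch of that prefix strip, assuming a NUMA-enabled kernel that exposes node0 under sysfs:

# Sketch: +([0-9]) is an extglob pattern, so enable extglob first.
shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[0]}"

The strip makes per-node and system-wide files parse identically in the same read loop.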
00:05:03.230 02:07:02 -- setup/common.sh@31 -- # read -r var val _
00:05:03.230 02:07:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.230 02:07:02 -- setup/common.sh@32 -- # continue
[... the same read / compare / continue trace repeats for each /proc/meminfo field until HugePages_Rsvd matches ...]
00:05:03.232 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.232 02:07:02 -- setup/common.sh@33 -- # echo 0
00:05:03.232 02:07:02 -- setup/common.sh@33 -- # return 0
nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:05:03.232 02:07:02 -- setup/hugepages.sh@100 -- # resv=0
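With anon=0, surp=0 and resv=0 read back and nr_hugepages at 1024, the script can assert that the pool it configured is exactly what the kernel reports. A sketch mirroring the consistency check at setup/hugepages.sh@107, reading the counters directly instead of through get_meminfo:

# Sketch: all HugePages_* counters are page counts, not kB.
read -r nr < /proc/sys/vm/nr_hugepages
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr + surp + resv )) && echo "pool consistent: $total pages"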
00:05:03.232 02:07:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:03.232 02:07:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:03.232 02:07:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:03.232 02:07:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:03.232 02:07:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.232 02:07:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:03.232 02:07:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:03.232 02:07:02 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:03.232 02:07:02 -- setup/common.sh@18 -- # local node=
00:05:03.232 02:07:02 -- setup/common.sh@19 -- # local var val
00:05:03.232 02:07:02 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.232 02:07:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.232 02:07:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.232 02:07:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.232 02:07:02 -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.232 02:07:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.232 02:07:02 -- setup/common.sh@31 -- # IFS=': '
00:05:03.232 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6863732 kB' 'MemAvailable: 9474804 kB' 'Buffers: 2436 kB' 'Cached: 2813664 kB' 'SwapCached: 0 kB' 'Active: 491676 kB' 'Inactive: 2443392 kB' 'Active(anon): 129436 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120572 kB' 'Mapped: 48672 kB' 'Shmem: 10468 kB' 'KReclaimable: 84756 kB' 'Slab: 164412 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79656 kB' 'KernelStack: 6560 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:03.232 02:07:02 -- setup/common.sh@31 -- # read -r var val _
00:05:03.232 02:07:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:03.232 02:07:02 -- setup/common.sh@32 -- # continue
[... the same read / compare / continue trace repeats for each /proc/meminfo field until HugePages_Total matches ...]
00:05:03.233 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:03.233 02:07:02 -- setup/common.sh@33 -- # echo 1024
00:05:03.233 02:07:02 -- setup/common.sh@33 -- # return 0
00:05:03.233 02:07:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.233 02:07:02 -- setup/hugepages.sh@112 -- # get_nodes
00:05:03.233 02:07:02 -- setup/hugepages.sh@27 -- # local node
00:05:03.233 02:07:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:03.233 02:07:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:03.233 02:07:02 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:03.233 02:07:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:03.233 02:07:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:03.233 02:07:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:03.233 02:07:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:03.233 02:07:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.233 02:07:02 -- setup/common.sh@18 -- # local node=0
00:05:03.233 02:07:02 -- setup/common.sh@19 -- # local var val
00:05:03.233 02:07:02 -- setup/common.sh@20 -- # local mem_f mem
00:05:03.233 02:07:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.233 02:07:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:03.233 02:07:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
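get_nodes discovers the topology by globbing sysfs and seeds nodes_sys with the system-wide total; the per-node pass then re-enters get_meminfo with node=0, so the @23 probe succeeds and mem_f switches to the node's own meminfo file. A sketch of that walk, assuming the standard node*/meminfo layout:

# Sketch: node meminfo lines carry a leading "Node 0" pair, so the
# value sits in awk field 4 rather than field 2.
declare -A node_pages
for f in /sys/devices/system/node/node*/meminfo; do
    [[ -e $f ]] || continue   # glob stays literal on non-NUMA kernels
    n=${f%/meminfo}; n=${n##*node}
    node_pages[$n]=$(awk '/HugePages_Total:/ {print $4}' "$f")
done
declare -p node_pages         # ([0]="1024") on this single-node VM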
00:05:03.233 02:07:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.233 02:07:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.233 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.233 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.233 02:07:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6863732 kB' 'MemUsed: 5378248 kB' 'SwapCached: 0 kB' 'Active: 491408 kB' 'Inactive: 2443392 kB' 'Active(anon): 129168 kB' 'Inactive(anon): 0 kB' 'Active(file): 362240 kB' 'Inactive(file): 2443392 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2816100 kB' 'Mapped: 48672 kB' 'AnonPages: 120280 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84756 kB' 'Slab: 164412 kB' 'SReclaimable: 84756 kB' 'SUnreclaim: 79656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' [xtrace condensed: each node0 field, MemTotal through Unaccepted, is tested against HugePages_Surp and skipped with "continue"]
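The node walk above (get_nodes) and the clear_hp teardown later in this test both go through per-node sysfs: every /sys/devices/system/node/nodeN directory exposes hugepages-<size> subdirectories whose nr_hugepages file can be read, or zeroed to release the pool. A small sketch of that mechanism, hedged as an illustration of what the traced helpers do rather than their exact code:

    # List (and optionally clear) per-node hugepage pools via sysfs.
    for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
        echo "${node##*/} ${hp##*hugepages-}: $(cat "$hp/nr_hugepages") pages"
        # echo 0 > "$hp/nr_hugepages"   # what clear_hp does per node/size (needs root)
      done
    done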
00:05:03.234 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.234 02:07:02 -- setup/common.sh@32 -- # continue 00:05:03.234 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.234 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.234 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.234 02:07:02 -- setup/common.sh@32 -- # continue 00:05:03.234 02:07:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.234 02:07:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.234 02:07:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.234 02:07:02 -- setup/common.sh@33 -- # echo 0 00:05:03.234 02:07:02 -- setup/common.sh@33 -- # return 0 00:05:03.234 node0=1024 expecting 1024 00:05:03.234 ************************************ 00:05:03.234 END TEST no_shrink_alloc 00:05:03.234 ************************************ 00:05:03.234 02:07:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.234 02:07:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.234 02:07:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.234 02:07:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.234 02:07:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:03.234 02:07:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:03.234 00:05:03.234 real 0m1.106s 00:05:03.234 user 0m0.537s 00:05:03.234 sys 0m0.598s 00:05:03.234 02:07:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.234 02:07:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.492 02:07:02 -- setup/hugepages.sh@217 -- # clear_hp 00:05:03.492 02:07:02 -- setup/hugepages.sh@37 -- # local node hp 00:05:03.492 02:07:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.492 02:07:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.492 02:07:02 -- setup/hugepages.sh@41 -- # echo 0 00:05:03.492 02:07:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.492 02:07:02 -- setup/hugepages.sh@41 -- # echo 0 00:05:03.492 02:07:02 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:03.492 02:07:02 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:03.492 00:05:03.492 real 0m4.648s 00:05:03.492 user 0m2.254s 00:05:03.492 sys 0m2.499s 00:05:03.493 ************************************ 00:05:03.493 END TEST hugepages 00:05:03.493 ************************************ 00:05:03.493 02:07:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.493 02:07:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.493 02:07:02 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:03.493 02:07:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.493 02:07:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.493 02:07:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.493 ************************************ 00:05:03.493 START TEST driver 00:05:03.493 ************************************ 00:05:03.493 02:07:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:03.493 * Looking for test storage... 
00:05:03.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:03.493 02:07:02 -- setup/driver.sh@68 -- # setup reset 00:05:03.493 02:07:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.493 02:07:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.060 02:07:03 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:04.060 02:07:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.060 02:07:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.060 02:07:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.060 ************************************ 00:05:04.060 START TEST guess_driver 00:05:04.060 ************************************ 00:05:04.060 02:07:03 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:04.060 02:07:03 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:04.060 02:07:03 -- setup/driver.sh@47 -- # local fail=0 00:05:04.060 02:07:03 -- setup/driver.sh@49 -- # pick_driver 00:05:04.060 02:07:03 -- setup/driver.sh@36 -- # vfio 00:05:04.060 02:07:03 -- setup/driver.sh@21 -- # local iommu_grups 00:05:04.060 02:07:03 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:04.060 02:07:03 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:04.060 02:07:03 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:04.060 02:07:03 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:04.060 02:07:03 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:04.060 02:07:03 -- setup/driver.sh@32 -- # return 1 00:05:04.060 02:07:03 -- setup/driver.sh@38 -- # uio 00:05:04.060 02:07:03 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:04.060 02:07:03 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:04.060 02:07:03 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:04.060 02:07:03 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:04.060 02:07:03 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:04.060 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:04.060 02:07:03 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:04.060 Looking for driver=uio_pci_generic 00:05:04.060 02:07:03 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:04.060 02:07:03 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:04.060 02:07:03 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:04.060 02:07:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.060 02:07:03 -- setup/driver.sh@45 -- # setup output config 00:05:04.060 02:07:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.060 02:07:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:04.626 02:07:04 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:04.626 02:07:04 -- setup/driver.sh@58 -- # continue 00:05:04.626 02:07:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.882 02:07:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.882 02:07:04 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:04.882 02:07:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.882 02:07:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.882 02:07:04 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:04.882 02:07:04 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.882 02:07:04 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:04.882 02:07:04 -- setup/driver.sh@65 -- # setup reset 00:05:04.882 02:07:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.882 02:07:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.448 ************************************ 00:05:05.448 END TEST guess_driver 00:05:05.448 ************************************ 00:05:05.448 00:05:05.448 real 0m1.477s 00:05:05.448 user 0m0.540s 00:05:05.448 sys 0m0.926s 00:05:05.448 02:07:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.448 02:07:04 -- common/autotest_common.sh@10 -- # set +x 00:05:05.448 ************************************ 00:05:05.448 END TEST driver 00:05:05.448 ************************************ 00:05:05.448 00:05:05.448 real 0m2.161s 00:05:05.448 user 0m0.764s 00:05:05.448 sys 0m1.425s 00:05:05.448 02:07:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.448 02:07:04 -- common/autotest_common.sh@10 -- # set +x 00:05:05.706 02:07:05 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:05.706 02:07:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:05.706 02:07:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.706 02:07:05 -- common/autotest_common.sh@10 -- # set +x 00:05:05.706 ************************************ 00:05:05.706 START TEST devices 00:05:05.706 ************************************ 00:05:05.706 02:07:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:05.706 * Looking for test storage... 00:05:05.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:05.706 02:07:05 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:05.706 02:07:05 -- setup/devices.sh@192 -- # setup reset 00:05:05.706 02:07:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.706 02:07:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.662 02:07:05 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:06.662 02:07:05 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:06.662 02:07:05 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:06.662 02:07:05 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:06.662 02:07:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:06.662 02:07:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:06.662 02:07:05 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:06.662 02:07:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:06.662 02:07:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:06.662 02:07:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:06.662 02:07:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:06.662 02:07:05 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:06.662 02:07:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:06.662 02:07:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:06.662 02:07:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:06.662 02:07:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:06.662 02:07:05 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:06.662 02:07:05 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:06.662 02:07:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:06.662 02:07:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:06.662 02:07:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:06.662 02:07:05 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:06.662 02:07:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:06.662 02:07:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:06.662 02:07:05 -- setup/devices.sh@196 -- # blocks=() 00:05:06.662 02:07:05 -- setup/devices.sh@196 -- # declare -a blocks 00:05:06.662 02:07:05 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:06.662 02:07:05 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:06.662 02:07:05 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:06.662 02:07:05 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:06.662 02:07:05 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:06.662 02:07:05 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:06.662 02:07:05 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:06.662 02:07:05 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:06.662 02:07:05 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:06.662 02:07:05 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:06.662 02:07:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:06.662 No valid GPT data, bailing 00:05:06.662 02:07:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:06.662 02:07:05 -- scripts/common.sh@393 -- # pt= 00:05:06.662 02:07:05 -- scripts/common.sh@394 -- # return 1 00:05:06.662 02:07:05 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:06.662 02:07:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:06.662 02:07:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:06.662 02:07:05 -- setup/common.sh@80 -- # echo 5368709120 00:05:06.662 02:07:05 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:06.662 02:07:05 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:06.662 02:07:05 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:06.662 02:07:05 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:06.662 02:07:05 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:06.662 02:07:05 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:06.662 02:07:05 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:06.662 02:07:05 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:06.662 02:07:05 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:06.662 02:07:05 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:06.662 02:07:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:06.662 No valid GPT data, bailing 00:05:06.662 02:07:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:06.662 02:07:06 -- scripts/common.sh@393 -- # pt= 00:05:06.662 02:07:06 -- scripts/common.sh@394 -- # return 1 00:05:06.662 02:07:06 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:06.662 02:07:06 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:06.662 02:07:06 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:06.662 02:07:06 -- setup/common.sh@80 -- # echo 4294967296 00:05:06.662 02:07:06 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:06.662 02:07:06 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:06.662 02:07:06 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 [xtrace condensed: nvme1n2 and nvme1n3 are probed exactly as nvme1n1 above; for each, spdk-gpt.py prints "No valid GPT data, bailing", blkid finds no PTTYPE, sec_size_to_bytes echoes 4294967296 (>= min_disk_size), and the namespace is registered under 0000:00:07.0] 00:05:06.663 02:07:06 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:06.663 02:07:06 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:06.663 02:07:06 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:06.663 02:07:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.663 02:07:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.663 02:07:06 -- common/autotest_common.sh@10 -- # set +x 00:05:06.663
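Everything the probe loop just established (four usable namespaces, nvme0n1 chosen as the test disk) comes from three checks per block device: no partition table, a capacity of at least min_disk_size (3221225472 bytes, i.e. 3 GiB), and a PCI mapping. A hedged sketch of that filter using only stock tools; spdk-gpt.py is SPDK-specific and omitted here:

    # Keep NVMe namespaces with no partition table and >= 3 GiB capacity.
    min_disk_size=3221225472
    blocks=()
    for block in /sys/block/nvme*; do
      dev=${block##*/}
      [[ $dev == *c* ]] && continue                 # skip nvmeXcYnZ controller paths
      pt=$(blkid -s PTTYPE -o value "/dev/$dev")    # empty when no table is found
      [[ -n $pt ]] && continue
      size=$(( $(cat "$block/size") * 512 ))        # the size file counts 512-byte sectors
      (( size >= min_disk_size )) && blocks+=("$dev")
    done
    printf 'usable: %s\n' "${blocks[@]}"

The sector math explains the trace's numbers: nvme0n1's 5368709120 bytes is 10485760 sectors times 512.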
************************************ 00:05:06.663 START TEST nvme_mount 00:05:06.663 ************************************ 00:05:06.663 02:07:06 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:06.663 02:07:06 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:06.663 02:07:06 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:06.663 02:07:06 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.663 02:07:06 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:06.663 02:07:06 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:06.663 02:07:06 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:06.663 02:07:06 -- setup/common.sh@40 -- # local part_no=1 00:05:06.663 02:07:06 -- setup/common.sh@41 -- # local size=1073741824 00:05:06.663 02:07:06 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:06.663 02:07:06 -- setup/common.sh@44 -- # parts=() 00:05:06.663 02:07:06 -- setup/common.sh@44 -- # local parts 00:05:06.663 02:07:06 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:06.663 02:07:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.663 02:07:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.663 02:07:06 -- setup/common.sh@46 -- # (( part++ )) 00:05:06.663 02:07:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.663 02:07:06 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:06.663 02:07:06 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:06.663 02:07:06 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:08.039 Creating new GPT entries in memory. 00:05:08.039 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:08.039 other utilities. 00:05:08.039 02:07:07 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:08.039 02:07:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.039 02:07:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:08.039 02:07:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:08.039 02:07:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:08.973 Creating new GPT entries in memory. 00:05:08.973 The operation has completed successfully. 
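partition_drive, traced above, zaps the GPT and then carves fixed-size partitions, holding flock on the disk node so sgdisk does not race udev or a second writer. The sector arithmetic matches the trace: size starts at 1073741824 and (( size /= 4096 )) leaves 262144 sectors, so partition 1 spans sectors 2048:264191. A sketch of that step, assuming /dev/nvme0n1 is expendable:

    # Wipe the label and create one 262144-sector partition, as in the trace.
    disk=/dev/nvme0n1
    size=$((1073741824 / 4096))              # 262144 sectors
    sgdisk "$disk" --zap-all
    part_start=2048
    part_end=$((part_start + size - 1))      # 264191
    flock "$disk" sgdisk "$disk" --new=1:$part_start:$part_end

The "GPT data structures destroyed!" and "The operation has completed successfully." lines in the log are sgdisk's normal output for these two calls.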
00:05:08.973 02:07:08 -- setup/common.sh@57 -- # (( part++ )) 00:05:08.973 02:07:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.973 02:07:08 -- setup/common.sh@62 -- # wait 65472 00:05:08.973 02:07:08 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.973 02:07:08 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:08.973 02:07:08 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.973 02:07:08 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:08.973 02:07:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:08.973 02:07:08 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.973 02:07:08 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.973 02:07:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:08.973 02:07:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:08.973 02:07:08 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.973 02:07:08 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.973 02:07:08 -- setup/devices.sh@53 -- # local found=0 00:05:08.973 02:07:08 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.973 02:07:08 -- setup/devices.sh@56 -- # : 00:05:08.973 02:07:08 -- setup/devices.sh@59 -- # local pci status 00:05:08.973 02:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.973 02:07:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:08.973 02:07:08 -- setup/devices.sh@47 -- # setup output config 00:05:08.973 02:07:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.973 02:07:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.973 02:07:08 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.973 02:07:08 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:08.973 02:07:08 -- setup/devices.sh@63 -- # found=1 00:05:08.973 02:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.973 02:07:08 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.973 02:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.539 02:07:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.539 02:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.539 02:07:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.539 02:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.539 02:07:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.539 02:07:08 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:09.539 02:07:08 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.539 02:07:08 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:09.539 02:07:08 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.539 02:07:08 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:09.539 02:07:08 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.539 02:07:08 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.539 02:07:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.539 02:07:08 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:09.539 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:09.539 02:07:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.539 02:07:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.797 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.797 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.797 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:09.797 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:09.797 02:07:09 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:09.797 02:07:09 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:09.797 02:07:09 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.797 02:07:09 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:09.797 02:07:09 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:09.797 02:07:09 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.797 02:07:09 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.797 02:07:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:09.797 02:07:09 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:09.797 02:07:09 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.797 02:07:09 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.797 02:07:09 -- setup/devices.sh@53 -- # local found=0 00:05:09.797 02:07:09 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:09.797 02:07:09 -- setup/devices.sh@56 -- # : 00:05:09.797 02:07:09 -- setup/devices.sh@59 -- # local pci status 00:05:09.797 02:07:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.797 02:07:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:09.797 02:07:09 -- setup/devices.sh@47 -- # setup output config 00:05:09.797 02:07:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.797 02:07:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:10.056 02:07:09 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.056 02:07:09 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:10.056 02:07:09 -- setup/devices.sh@63 -- # found=1 00:05:10.056 02:07:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.056 02:07:09 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.056 
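After the cleanup above, the test reformats the bare namespace, this time capping the filesystem at 1024M (the trace's mkfs.ext4 -qF /dev/nvme0n1 1024M) before remounting it at the same mount point. A sketch of that mkfs-and-verify step, with paths taken from the trace:

    # Format the whole namespace with a size-capped ext4 and mount it.
    mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mkdir -p "$mount_point"
    mkfs.ext4 -qF /dev/nvme0n1 1024M         # filesystem limited to 1024 MiB of the disk
    mount /dev/nvme0n1 "$mount_point"
    mountpoint -q "$mount_point" && echo mounted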
02:07:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.314 02:07:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.314 02:07:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.314 02:07:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.314 02:07:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.572 02:07:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.572 02:07:09 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:10.572 02:07:09 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:10.572 02:07:09 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:10.572 02:07:09 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:10.572 02:07:09 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:10.572 02:07:09 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:10.572 02:07:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:10.572 02:07:09 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:10.572 02:07:09 -- setup/devices.sh@50 -- # local mount_point= 00:05:10.572 02:07:09 -- setup/devices.sh@51 -- # local test_file= 00:05:10.572 02:07:09 -- setup/devices.sh@53 -- # local found=0 00:05:10.572 02:07:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:10.572 02:07:09 -- setup/devices.sh@59 -- # local pci status 00:05:10.572 02:07:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.572 02:07:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:10.572 02:07:09 -- setup/devices.sh@47 -- # setup output config 00:05:10.572 02:07:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.572 02:07:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:10.831 02:07:10 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.831 02:07:10 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:10.831 02:07:10 -- setup/devices.sh@63 -- # found=1 00:05:10.831 02:07:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.831 02:07:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:10.831 02:07:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.088 02:07:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:11.088 02:07:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.088 02:07:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:11.088 02:07:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.347 02:07:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.347 02:07:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:11.347 02:07:10 -- setup/devices.sh@68 -- # return 0 00:05:11.347 02:07:10 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:11.347 02:07:10 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:11.347 02:07:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.347 02:07:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:11.347 02:07:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:11.347 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:11.347 00:05:11.347 real 0m4.459s 00:05:11.347 user 0m0.979s 00:05:11.347 sys 0m1.173s 00:05:11.347 02:07:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.347 02:07:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.347 ************************************ 00:05:11.347 END TEST nvme_mount 00:05:11.347 ************************************ 00:05:11.347 02:07:10 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:11.347 02:07:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.347 02:07:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.347 02:07:10 -- common/autotest_common.sh@10 -- # set +x 00:05:11.347 ************************************ 00:05:11.347 START TEST dm_mount 00:05:11.347 ************************************ 00:05:11.347 02:07:10 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:11.347 02:07:10 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:11.347 02:07:10 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:11.347 02:07:10 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:11.347 02:07:10 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:11.347 02:07:10 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:11.347 02:07:10 -- setup/common.sh@40 -- # local part_no=2 00:05:11.347 02:07:10 -- setup/common.sh@41 -- # local size=1073741824 00:05:11.347 02:07:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:11.347 02:07:10 -- setup/common.sh@44 -- # parts=() 00:05:11.347 02:07:10 -- setup/common.sh@44 -- # local parts 00:05:11.347 02:07:10 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:11.347 02:07:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.347 02:07:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:11.347 02:07:10 -- setup/common.sh@46 -- # (( part++ )) 00:05:11.347 02:07:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.347 02:07:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:11.347 02:07:10 -- setup/common.sh@46 -- # (( part++ )) 00:05:11.347 02:07:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.347 02:07:10 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:11.347 02:07:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:11.347 02:07:10 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:12.279 Creating new GPT entries in memory. 00:05:12.279 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.279 other utilities. 00:05:12.279 02:07:11 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.279 02:07:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.279 02:07:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.279 02:07:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.279 02:07:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:13.210 Creating new GPT entries in memory. 00:05:13.210 The operation has completed successfully. 00:05:13.210 02:07:12 -- setup/common.sh@57 -- # (( part++ )) 00:05:13.210 02:07:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.210 02:07:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:13.210 02:07:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:13.210 02:07:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:14.582 The operation has completed successfully. 00:05:14.582 02:07:13 -- setup/common.sh@57 -- # (( part++ )) 00:05:14.582 02:07:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.582 02:07:13 -- setup/common.sh@62 -- # wait 65931 00:05:14.582 02:07:13 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:14.582 02:07:13 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.582 02:07:13 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:14.582 02:07:13 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:14.582 02:07:13 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:14.582 02:07:13 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.582 02:07:13 -- setup/devices.sh@161 -- # break 00:05:14.582 02:07:13 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.582 02:07:13 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:14.582 02:07:13 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:14.582 02:07:13 -- setup/devices.sh@166 -- # dm=dm-0 00:05:14.582 02:07:13 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:14.582 02:07:13 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:14.582 02:07:13 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.582 02:07:13 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:14.582 02:07:13 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.582 02:07:13 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:14.582 02:07:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:14.582 02:07:13 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.582 02:07:13 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:14.582 02:07:13 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:14.582 02:07:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:14.582 02:07:13 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.582 02:07:13 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:14.582 02:07:13 -- setup/devices.sh@53 -- # local found=0 00:05:14.582 02:07:13 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.582 02:07:13 -- setup/devices.sh@56 -- # : 00:05:14.582 02:07:13 -- setup/devices.sh@59 -- # local pci status 00:05:14.582 02:07:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.582 02:07:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:14.582 02:07:13 -- setup/devices.sh@47 -- # setup output config 00:05:14.582 02:07:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.582 02:07:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.582 02:07:14 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.582 02:07:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:14.582 02:07:14 -- setup/devices.sh@63 -- # found=1 00:05:14.582 02:07:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.582 02:07:14 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.582 02:07:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.840 02:07:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:14.840 02:07:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.098 02:07:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.098 02:07:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.098 02:07:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.098 02:07:14 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:15.098 02:07:14 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:15.098 02:07:14 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:15.098 02:07:14 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:15.098 02:07:14 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:15.098 02:07:14 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:15.098 02:07:14 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:15.098 02:07:14 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:15.098 02:07:14 -- setup/devices.sh@50 -- # local mount_point= 00:05:15.098 02:07:14 -- setup/devices.sh@51 -- # local test_file= 00:05:15.098 02:07:14 -- setup/devices.sh@53 -- # local found=0 00:05:15.098 02:07:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:15.098 02:07:14 -- setup/devices.sh@59 -- # local pci status 00:05:15.098 02:07:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.098 02:07:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:15.098 02:07:14 -- setup/devices.sh@47 -- # setup output config 00:05:15.098 02:07:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.098 02:07:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.356 02:07:14 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.356 02:07:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:15.356 02:07:14 -- setup/devices.sh@63 -- # found=1 00:05:15.356 02:07:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.356 02:07:14 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.356 02:07:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.614 02:07:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.614 02:07:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.614 02:07:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:15.614 02:07:15 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.614 02:07:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.614 02:07:15 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.614 02:07:15 -- setup/devices.sh@68 -- # return 0 00:05:15.614 02:07:15 -- setup/devices.sh@187 -- # cleanup_dm 00:05:15.614 02:07:15 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:15.614 02:07:15 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:15.614 02:07:15 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:15.614 02:07:15 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.614 02:07:15 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:15.872 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.872 02:07:15 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:15.872 02:07:15 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:15.872 00:05:15.872 real 0m4.465s 00:05:15.872 user 0m0.634s 00:05:15.872 sys 0m0.769s 00:05:15.872 ************************************ 00:05:15.872 END TEST dm_mount 00:05:15.872 ************************************ 00:05:15.872 02:07:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.872 02:07:15 -- common/autotest_common.sh@10 -- # set +x 00:05:15.872 02:07:15 -- setup/devices.sh@1 -- # cleanup 00:05:15.872 02:07:15 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:15.872 02:07:15 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.872 02:07:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.872 02:07:15 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.872 02:07:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.872 02:07:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.130 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.130 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.130 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:16.130 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:16.130 02:07:15 -- setup/devices.sh@12 -- # cleanup_dm 00:05:16.130 02:07:15 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.130 02:07:15 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:16.130 02:07:15 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.130 02:07:15 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:16.130 02:07:15 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.130 02:07:15 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:16.130 00:05:16.130 real 0m10.466s 00:05:16.130 user 0m2.244s 00:05:16.130 sys 0m2.557s 00:05:16.130 02:07:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.130 ************************************ 00:05:16.130 END TEST devices 00:05:16.130 ************************************ 00:05:16.130 02:07:15 -- common/autotest_common.sh@10 -- # set +x 00:05:16.130 00:05:16.130 real 0m21.762s 00:05:16.130 user 0m7.208s 00:05:16.130 sys 0m8.962s 00:05:16.130 02:07:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.130 02:07:15 -- common/autotest_common.sh@10 -- # set +x 00:05:16.130 ************************************ 00:05:16.130 END TEST setup.sh 00:05:16.130 ************************************ 00:05:16.130 02:07:15 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:16.388 Hugepages
00:05:16.388 node     hugesize     free /  total
00:05:16.388 node0   1048576kB        0 /      0
00:05:16.388 node0      2048kB     2048 /   2048
00:05:16.388
00:05:16.388 Type    BDF           Vendor  Device  NUMA     Driver      Device  Block devices
00:05:16.388 virtio  0000:00:03.0  1af4    1001    unknown  virtio-pci  -       vda
00:05:16.388 NVMe    0000:00:06.0  1b36    0010    unknown  nvme        nvme0   nvme0n1
00:05:16.645 NVMe    0000:00:07.0  1b36    0010    unknown  nvme        nvme1   nvme1n1 nvme1n2 nvme1n3
00:05:16.645 02:07:15 -- spdk/autotest.sh@141 -- # uname -s 00:05:16.645 02:07:15 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:16.645 02:07:15 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:16.645 02:07:15 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.212 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.470 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.470 02:07:16 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:18.405 02:07:17 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:18.405 02:07:17 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:18.405 02:07:17 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:18.406 02:07:17 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:18.406 02:07:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:18.406 02:07:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:18.406 02:07:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.406 02:07:17 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:18.406 02:07:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:18.406 02:07:17 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:18.406 02:07:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:18.406 02:07:17 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.975 Waiting for block devices as requested 00:05:18.975 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.975 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:18.975 02:07:18 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:18.975 02:07:18 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:18.975 02:07:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:18.975 02:07:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:18.975 02:07:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:18.975 02:07:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:18.975 02:07:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:18.975 02:07:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:18.975 02:07:18 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:18.975 02:07:18 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:18.975 02:07:18 --
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:18.975 02:07:18 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:18.975 02:07:18 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:18.975 02:07:18 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:18.975 02:07:18 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:18.975 02:07:18 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:18.975 02:07:18 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:18.975 02:07:18 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:18.975 02:07:18 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:18.975 02:07:18 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:18.975 02:07:18 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:18.975 02:07:18 -- common/autotest_common.sh@1542 -- # continue 00:05:18.975 02:07:18 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:18.975 02:07:18 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:18.975 02:07:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:18.975 02:07:18 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:19.234 02:07:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:19.234 02:07:18 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:19.234 02:07:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:19.234 02:07:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:19.234 02:07:18 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:19.234 02:07:18 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:19.234 02:07:18 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:19.234 02:07:18 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:19.234 02:07:18 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:19.234 02:07:18 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:19.234 02:07:18 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:19.234 02:07:18 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:19.234 02:07:18 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:19.234 02:07:18 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:19.234 02:07:18 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:19.234 02:07:18 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:19.234 02:07:18 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:19.234 02:07:18 -- common/autotest_common.sh@1542 -- # continue 00:05:19.234 02:07:18 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:19.234 02:07:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:19.234 02:07:18 -- common/autotest_common.sh@10 -- # set +x 00:05:19.234 02:07:18 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:19.234 02:07:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:19.234 02:07:18 -- common/autotest_common.sh@10 -- # set +x 00:05:19.234 02:07:18 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.802 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.060 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:20.060 02:07:19 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:20.060 02:07:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:20.060 02:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:20.060 02:07:19 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:20.060 02:07:19 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:20.060 02:07:19 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:20.060 02:07:19 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:20.060 02:07:19 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:20.060 02:07:19 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:20.060 02:07:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:20.060 02:07:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:20.060 02:07:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.060 02:07:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.060 02:07:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:20.060 02:07:19 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:20.060 02:07:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:20.060 02:07:19 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:20.060 02:07:19 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:20.060 02:07:19 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:20.060 02:07:19 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.060 02:07:19 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:20.060 02:07:19 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:20.060 02:07:19 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:20.060 02:07:19 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:20.060 02:07:19 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:20.060 02:07:19 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:20.060 02:07:19 -- common/autotest_common.sh@1578 -- # return 0 00:05:20.060 02:07:19 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:20.060 02:07:19 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:20.060 02:07:19 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:20.060 02:07:19 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:20.060 02:07:19 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:20.060 02:07:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:20.060 02:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:20.060 02:07:19 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:20.060 02:07:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.060 02:07:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.060 02:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:20.060 ************************************ 00:05:20.060 START TEST env 00:05:20.060 ************************************ 00:05:20.060 02:07:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:20.319 * Looking for test storage... 
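The opal_revert_cleanup step above decides whether an OPAL revert is needed by comparing each controller's PCI device id against 0x0a54. A minimal by-hand equivalent of that check, using the first BDF printed in the trace (on this VM it reads back 0x0010, so the revert is skipped):

    bdf=0000:00:06.0                         # BDF taken from the trace above
    cat "/sys/bus/pci/devices/$bdf/device"   # prints 0x0010 here, not 0x0a54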
00:05:20.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:20.319 02:07:19 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:20.319 02:07:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.319 02:07:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.319 02:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 ************************************ 00:05:20.319 START TEST env_memory 00:05:20.319 ************************************ 00:05:20.319 02:07:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:20.319 00:05:20.319 00:05:20.319 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.319 http://cunit.sourceforge.net/ 00:05:20.319 00:05:20.319 00:05:20.319 Suite: memory 00:05:20.319 Test: alloc and free memory map ...[2024-07-15 02:07:19.740396] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:20.319 passed 00:05:20.319 Test: mem map translation ...[2024-07-15 02:07:19.771433] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:20.319 [2024-07-15 02:07:19.771489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:20.319 [2024-07-15 02:07:19.771557] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:20.319 [2024-07-15 02:07:19.771569] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:20.319 passed 00:05:20.319 Test: mem map registration ...[2024-07-15 02:07:19.835368] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:20.319 [2024-07-15 02:07:19.835425] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:20.319 passed 00:05:20.596 Test: mem map adjacent registrations ...passed 00:05:20.596 00:05:20.596 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.596 suites 1 1 n/a 0 0 00:05:20.596 tests 4 4 4 0 0 00:05:20.596 asserts 152 152 152 0 n/a 00:05:20.596 00:05:20.596 Elapsed time = 0.214 seconds 00:05:20.596 00:05:20.596 real 0m0.230s 00:05:20.596 user 0m0.214s 00:05:20.596 sys 0m0.013s 00:05:20.596 02:07:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.596 02:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:20.596 ************************************ 00:05:20.596 END TEST env_memory 00:05:20.596 ************************************ 00:05:20.596 02:07:19 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:20.596 02:07:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.596 02:07:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.596 02:07:19 -- common/autotest_common.sh@10 -- # set +x 00:05:20.596 ************************************ 00:05:20.596 START TEST env_vtophys 00:05:20.596 ************************************ 00:05:20.596 02:07:19 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:20.596 EAL: lib.eal log level changed from notice to debug 00:05:20.596 EAL: Detected lcore 0 as core 0 on socket 0 00:05:20.596 EAL: Detected lcore 1 as core 0 on socket 0 00:05:20.596 EAL: Detected lcore 2 as core 0 on socket 0 00:05:20.596 EAL: Detected lcore 3 as core 0 on socket 0 00:05:20.596 EAL: Detected lcore 4 as core 0 on socket 0 00:05:20.596 EAL: Detected lcore 5 as core 0 on socket 0 00:05:20.596 EAL: Detected lcore 6 as core 0 on socket 0 00:05:20.596 EAL: Detected lcore 7 as core 0 on socket 0 00:05:20.596 EAL: Detected lcore 8 as core 0 on socket 0 00:05:20.596 EAL: Detected lcore 9 as core 0 on socket 0 00:05:20.596 EAL: Maximum logical cores by configuration: 128 00:05:20.596 EAL: Detected CPU lcores: 10 00:05:20.596 EAL: Detected NUMA nodes: 1 00:05:20.596 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:20.596 EAL: Detected shared linkage of DPDK 00:05:20.596 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:20.596 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:20.596 EAL: Registered [vdev] bus. 00:05:20.596 EAL: bus.vdev log level changed from disabled to notice 00:05:20.596 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:20.596 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:20.596 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:20.596 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:20.596 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:20.596 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:20.596 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:20.596 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:20.596 EAL: No shared files mode enabled, IPC will be disabled 00:05:20.596 EAL: No shared files mode enabled, IPC is disabled 00:05:20.596 EAL: Selected IOVA mode 'PA' 00:05:20.596 EAL: Probing VFIO support... 00:05:20.596 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:20.596 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:20.596 EAL: Ask a virtual area of 0x2e000 bytes 00:05:20.596 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:20.596 EAL: Setting up physically contiguous memory... 
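The "Module /sys/module/vfio not found" probe above is why EAL ends up selecting IOVA mode 'PA' on this guest. A simplified pre-flight check in the same spirit (EAL's own probe also looks at /sys/module/vfio; the module names here are the standard kernel ones, nothing SPDK-specific):

    if [ -d /sys/module/vfio_pci ]; then
        echo "vfio-pci loaded: VFIO probing can succeed"
    else
        echo "vfio-pci absent: EAL will skip VFIO and fall back to IOVA mode 'PA'"
    fi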
00:05:20.596 EAL: Setting maximum number of open files to 524288 00:05:20.596 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:20.596 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:20.596 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.596 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:20.596 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.596 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.596 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:20.596 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:20.596 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.596 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:20.596 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.596 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.596 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:20.596 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:20.596 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.596 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:20.596 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.596 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.596 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:20.596 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:20.596 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.596 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:20.596 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.596 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.596 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:20.596 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:20.597 EAL: Hugepages will be freed exactly as allocated. 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: TSC frequency is ~2200000 KHz 00:05:20.597 EAL: Main lcore 0 is ready (tid=7f1726e64a00;cpuset=[0]) 00:05:20.597 EAL: Trying to obtain current memory policy. 00:05:20.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.597 EAL: Restoring previous memory policy: 0 00:05:20.597 EAL: request: mp_malloc_sync 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: Heap on socket 0 was expanded by 2MB 00:05:20.597 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:20.597 EAL: Mem event callback 'spdk:(nil)' registered 00:05:20.597 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:20.597 00:05:20.597 00:05:20.597 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.597 http://cunit.sourceforge.net/ 00:05:20.597 00:05:20.597 00:05:20.597 Suite: components_suite 00:05:20.597 Test: vtophys_malloc_test ...passed 00:05:20.597 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:20.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.597 EAL: Restoring previous memory policy: 4 00:05:20.597 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.597 EAL: request: mp_malloc_sync 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: Heap on socket 0 was expanded by 4MB 00:05:20.597 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.597 EAL: request: mp_malloc_sync 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: Heap on socket 0 was shrunk by 4MB 00:05:20.597 EAL: Trying to obtain current memory policy. 00:05:20.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.597 EAL: Restoring previous memory policy: 4 00:05:20.597 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.597 EAL: request: mp_malloc_sync 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: Heap on socket 0 was expanded by 6MB 00:05:20.597 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.597 EAL: request: mp_malloc_sync 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: Heap on socket 0 was shrunk by 6MB 00:05:20.597 EAL: Trying to obtain current memory policy. 00:05:20.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.597 EAL: Restoring previous memory policy: 4 00:05:20.597 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.597 EAL: request: mp_malloc_sync 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: Heap on socket 0 was expanded by 10MB 00:05:20.597 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.597 EAL: request: mp_malloc_sync 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: Heap on socket 0 was shrunk by 10MB 00:05:20.597 EAL: Trying to obtain current memory policy. 00:05:20.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.597 EAL: Restoring previous memory policy: 4 00:05:20.597 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.597 EAL: request: mp_malloc_sync 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: Heap on socket 0 was expanded by 18MB 00:05:20.597 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.597 EAL: request: mp_malloc_sync 00:05:20.597 EAL: No shared files mode enabled, IPC is disabled 00:05:20.597 EAL: Heap on socket 0 was shrunk by 18MB 00:05:20.597 EAL: Trying to obtain current memory policy. 00:05:20.597 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.855 EAL: Restoring previous memory policy: 4 00:05:20.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.855 EAL: request: mp_malloc_sync 00:05:20.855 EAL: No shared files mode enabled, IPC is disabled 00:05:20.855 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.855 EAL: request: mp_malloc_sync 00:05:20.855 EAL: No shared files mode enabled, IPC is disabled 00:05:20.855 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.855 EAL: Trying to obtain current memory policy. 
00:05:20.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.855 EAL: Restoring previous memory policy: 4 00:05:20.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.855 EAL: request: mp_malloc_sync 00:05:20.855 EAL: No shared files mode enabled, IPC is disabled 00:05:20.855 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.855 EAL: request: mp_malloc_sync 00:05:20.855 EAL: No shared files mode enabled, IPC is disabled 00:05:20.855 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.855 EAL: Trying to obtain current memory policy. 00:05:20.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.855 EAL: Restoring previous memory policy: 4 00:05:20.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.855 EAL: request: mp_malloc_sync 00:05:20.855 EAL: No shared files mode enabled, IPC is disabled 00:05:20.855 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.855 EAL: request: mp_malloc_sync 00:05:20.855 EAL: No shared files mode enabled, IPC is disabled 00:05:20.855 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.855 EAL: Trying to obtain current memory policy. 00:05:20.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.855 EAL: Restoring previous memory policy: 4 00:05:20.855 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.855 EAL: request: mp_malloc_sync 00:05:20.855 EAL: No shared files mode enabled, IPC is disabled 00:05:20.855 EAL: Heap on socket 0 was expanded by 258MB 00:05:21.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.113 EAL: request: mp_malloc_sync 00:05:21.113 EAL: No shared files mode enabled, IPC is disabled 00:05:21.113 EAL: Heap on socket 0 was shrunk by 258MB 00:05:21.113 EAL: Trying to obtain current memory policy. 00:05:21.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.113 EAL: Restoring previous memory policy: 4 00:05:21.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.113 EAL: request: mp_malloc_sync 00:05:21.113 EAL: No shared files mode enabled, IPC is disabled 00:05:21.113 EAL: Heap on socket 0 was expanded by 514MB 00:05:21.371 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.371 EAL: request: mp_malloc_sync 00:05:21.371 EAL: No shared files mode enabled, IPC is disabled 00:05:21.371 EAL: Heap on socket 0 was shrunk by 514MB 00:05:21.371 EAL: Trying to obtain current memory policy. 
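vtophys_spdk_malloc_test above walks an escalating ladder of allocation sizes (4MB, 6MB, 10MB, 18MB, ... up to 1026MB), and the registered spdk mem event callback logs a matching expand/shrink pair for every rung. A one-liner to tally those pairs from a saved copy of this log (the filename is hypothetical):

    grep -oE "Heap on socket 0 was (expanded|shrunk) by [0-9]+MB" autotest.log | sort | uniq -c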
00:05:21.371 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.629 EAL: Restoring previous memory policy: 4 00:05:21.629 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.629 EAL: request: mp_malloc_sync 00:05:21.629 EAL: No shared files mode enabled, IPC is disabled 00:05:21.629 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.887 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.146 passed 00:05:22.146 00:05:22.146 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.146 suites 1 1 n/a 0 0 00:05:22.146 tests 2 2 2 0 0 00:05:22.146 asserts 5218 5218 5218 0 n/a 00:05:22.146 00:05:22.146 Elapsed time = 1.277 seconds 00:05:22.146 EAL: request: mp_malloc_sync 00:05:22.146 EAL: No shared files mode enabled, IPC is disabled 00:05:22.146 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:22.146 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.146 EAL: request: mp_malloc_sync 00:05:22.146 EAL: No shared files mode enabled, IPC is disabled 00:05:22.146 EAL: Heap on socket 0 was shrunk by 2MB 00:05:22.146 EAL: No shared files mode enabled, IPC is disabled 00:05:22.146 EAL: No shared files mode enabled, IPC is disabled 00:05:22.146 EAL: No shared files mode enabled, IPC is disabled 00:05:22.146 00:05:22.146 real 0m1.476s 00:05:22.146 user 0m0.804s 00:05:22.146 sys 0m0.535s 00:05:22.146 02:07:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.146 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.146 ************************************ 00:05:22.146 END TEST env_vtophys 00:05:22.146 ************************************ 00:05:22.146 02:07:21 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:22.146 02:07:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.146 02:07:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.146 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.146 ************************************ 00:05:22.146 START TEST env_pci 00:05:22.146 ************************************ 00:05:22.146 02:07:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:22.146 00:05:22.146 00:05:22.147 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.147 http://cunit.sourceforge.net/ 00:05:22.147 00:05:22.147 00:05:22.147 Suite: pci 00:05:22.147 Test: pci_hook ...[2024-07-15 02:07:21.519535] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67062 has claimed it 00:05:22.147 passed 00:05:22.147 00:05:22.147 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.147 suites 1 1 n/a 0 0 00:05:22.147 tests 1 1 1 0 0 00:05:22.147 asserts 25 25 25 0 n/a 00:05:22.147 00:05:22.147 Elapsed time = 0.002 seconds 00:05:22.147 EAL: Cannot find device (10000:00:01.0) 00:05:22.147 EAL: Failed to attach device on primary process 00:05:22.147 00:05:22.147 real 0m0.018s 00:05:22.147 user 0m0.009s 00:05:22.147 sys 0m0.009s 00:05:22.147 02:07:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.147 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.147 ************************************ 00:05:22.147 END TEST env_pci 00:05:22.147 ************************************ 00:05:22.147 02:07:21 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:22.147 02:07:21 -- env/env.sh@15 -- # uname 00:05:22.147 02:07:21 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:22.147 02:07:21 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:22.147 02:07:21 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.147 02:07:21 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:22.147 02:07:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.147 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.147 ************************************ 00:05:22.147 START TEST env_dpdk_post_init 00:05:22.147 ************************************ 00:05:22.147 02:07:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.147 EAL: Detected CPU lcores: 10 00:05:22.147 EAL: Detected NUMA nodes: 1 00:05:22.147 EAL: Detected shared linkage of DPDK 00:05:22.147 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.147 EAL: Selected IOVA mode 'PA' 00:05:22.406 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.406 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:22.406 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:22.406 Starting DPDK initialization... 00:05:22.406 Starting SPDK post initialization... 00:05:22.406 SPDK NVMe probe 00:05:22.406 Attaching to 0000:00:06.0 00:05:22.406 Attaching to 0000:00:07.0 00:05:22.406 Attached to 0000:00:06.0 00:05:22.406 Attached to 0000:00:07.0 00:05:22.406 Cleaning up... 00:05:22.406 00:05:22.406 real 0m0.178s 00:05:22.406 user 0m0.038s 00:05:22.406 sys 0m0.038s 00:05:22.406 02:07:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.406 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.406 ************************************ 00:05:22.406 END TEST env_dpdk_post_init 00:05:22.406 ************************************ 00:05:22.406 02:07:21 -- env/env.sh@26 -- # uname 00:05:22.406 02:07:21 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:22.406 02:07:21 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.406 02:07:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.406 02:07:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.406 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.406 ************************************ 00:05:22.406 START TEST env_mem_callbacks 00:05:22.406 ************************************ 00:05:22.406 02:07:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.406 EAL: Detected CPU lcores: 10 00:05:22.406 EAL: Detected NUMA nodes: 1 00:05:22.406 EAL: Detected shared linkage of DPDK 00:05:22.406 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.406 EAL: Selected IOVA mode 'PA' 00:05:22.406 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.406 00:05:22.406 00:05:22.406 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.406 http://cunit.sourceforge.net/ 00:05:22.406 00:05:22.406 00:05:22.406 Suite: memory 00:05:22.406 Test: test ... 
00:05:22.406 register 0x200000200000 2097152 00:05:22.406 malloc 3145728 00:05:22.406 register 0x200000400000 4194304 00:05:22.406 buf 0x200000500000 len 3145728 PASSED 00:05:22.406 malloc 64 00:05:22.406 buf 0x2000004fff40 len 64 PASSED 00:05:22.406 malloc 4194304 00:05:22.406 register 0x200000800000 6291456 00:05:22.406 buf 0x200000a00000 len 4194304 PASSED 00:05:22.406 free 0x200000500000 3145728 00:05:22.406 free 0x2000004fff40 64 00:05:22.406 unregister 0x200000400000 4194304 PASSED 00:05:22.406 free 0x200000a00000 4194304 00:05:22.406 unregister 0x200000800000 6291456 PASSED 00:05:22.406 malloc 8388608 00:05:22.406 register 0x200000400000 10485760 00:05:22.406 buf 0x200000600000 len 8388608 PASSED 00:05:22.406 free 0x200000600000 8388608 00:05:22.406 unregister 0x200000400000 10485760 PASSED 00:05:22.406 passed 00:05:22.406 00:05:22.406 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.406 suites 1 1 n/a 0 0 00:05:22.406 tests 1 1 1 0 0 00:05:22.406 asserts 15 15 15 0 n/a 00:05:22.406 00:05:22.406 Elapsed time = 0.007 seconds 00:05:22.406 00:05:22.406 real 0m0.133s 00:05:22.406 user 0m0.015s 00:05:22.406 sys 0m0.017s 00:05:22.406 02:07:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.406 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.406 ************************************ 00:05:22.406 END TEST env_mem_callbacks 00:05:22.406 ************************************ 00:05:22.666 00:05:22.666 real 0m2.382s 00:05:22.666 user 0m1.190s 00:05:22.666 sys 0m0.824s 00:05:22.666 02:07:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.666 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.666 ************************************ 00:05:22.666 END TEST env 00:05:22.666 ************************************ 00:05:22.666 02:07:22 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:22.666 02:07:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.666 02:07:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.666 02:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:22.666 ************************************ 00:05:22.666 START TEST rpc 00:05:22.666 ************************************ 00:05:22.666 02:07:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:22.666 * Looking for test storage... 00:05:22.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:22.666 02:07:22 -- rpc/rpc.sh@65 -- # spdk_pid=67170 00:05:22.666 02:07:22 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:22.666 02:07:22 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.666 02:07:22 -- rpc/rpc.sh@67 -- # waitforlisten 67170 00:05:22.666 02:07:22 -- common/autotest_common.sh@819 -- # '[' -z 67170 ']' 00:05:22.666 02:07:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.666 02:07:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.666 02:07:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
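The waitforlisten call above blocks until the freshly launched spdk_tgt (pid 67170) exposes its RPC socket. A stripped-down sketch of that wait, assuming the socket path printed in the trace and an arbitrary 10-second budget:

    sock=/var/tmp/spdk.sock        # path printed in the trace above
    for _ in $(seq 1 100); do      # ~10 s total at 0.1 s per probe (budget is an assumption)
        [ -S "$sock" ] && break
        sleep 0.1
    done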
00:05:22.666 02:07:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.666 02:07:22 -- common/autotest_common.sh@10 -- # set +x 00:05:22.666 [2024-07-15 02:07:22.178898] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:22.666 [2024-07-15 02:07:22.179026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67170 ] 00:05:22.924 [2024-07-15 02:07:22.313835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.924 [2024-07-15 02:07:22.398315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.924 [2024-07-15 02:07:22.398480] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:22.924 [2024-07-15 02:07:22.398494] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67170' to capture a snapshot of events at runtime. 00:05:22.924 [2024-07-15 02:07:22.398504] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67170 for offline analysis/debug. 00:05:22.924 [2024-07-15 02:07:22.398531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.872 02:07:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.872 02:07:23 -- common/autotest_common.sh@852 -- # return 0 00:05:23.872 02:07:23 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.872 02:07:23 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:23.872 02:07:23 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:23.872 02:07:23 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:23.872 02:07:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.872 02:07:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.872 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.872 ************************************ 00:05:23.872 START TEST rpc_integrity 00:05:23.872 ************************************ 00:05:23.872 02:07:23 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:23.872 02:07:23 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.872 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.872 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.872 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.872 02:07:23 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.872 02:07:23 -- rpc/rpc.sh@13 -- # jq length 00:05:23.872 02:07:23 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:23.872 02:07:23 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:23.872 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.872 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.872 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.872 02:07:23 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:23.872 02:07:23 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:23.872 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.872 02:07:23 -- 
common/autotest_common.sh@10 -- # set +x 00:05:23.872 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.872 02:07:23 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:23.872 { 00:05:23.872 "aliases": [ 00:05:23.872 "86d81b71-7ea1-45fc-8ed5-426c08f85ba7" 00:05:23.872 ], 00:05:23.872 "assigned_rate_limits": { 00:05:23.872 "r_mbytes_per_sec": 0, 00:05:23.872 "rw_ios_per_sec": 0, 00:05:23.872 "rw_mbytes_per_sec": 0, 00:05:23.872 "w_mbytes_per_sec": 0 00:05:23.872 }, 00:05:23.872 "block_size": 512, 00:05:23.872 "claimed": false, 00:05:23.872 "driver_specific": {}, 00:05:23.872 "memory_domains": [ 00:05:23.872 { 00:05:23.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.872 "dma_device_type": 2 00:05:23.872 } 00:05:23.872 ], 00:05:23.872 "name": "Malloc0", 00:05:23.872 "num_blocks": 16384, 00:05:23.872 "product_name": "Malloc disk", 00:05:23.872 "supported_io_types": { 00:05:23.872 "abort": true, 00:05:23.872 "compare": false, 00:05:23.872 "compare_and_write": false, 00:05:23.872 "flush": true, 00:05:23.872 "nvme_admin": false, 00:05:23.872 "nvme_io": false, 00:05:23.872 "read": true, 00:05:23.872 "reset": true, 00:05:23.872 "unmap": true, 00:05:23.872 "write": true, 00:05:23.872 "write_zeroes": true 00:05:23.872 }, 00:05:23.872 "uuid": "86d81b71-7ea1-45fc-8ed5-426c08f85ba7", 00:05:23.872 "zoned": false 00:05:23.872 } 00:05:23.872 ]' 00:05:23.872 02:07:23 -- rpc/rpc.sh@17 -- # jq length 00:05:23.872 02:07:23 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:23.872 02:07:23 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:23.872 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.872 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.872 [2024-07-15 02:07:23.336681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:23.872 [2024-07-15 02:07:23.336769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:23.872 [2024-07-15 02:07:23.336790] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ecb4d0 00:05:23.872 [2024-07-15 02:07:23.336800] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:23.872 [2024-07-15 02:07:23.338581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:23.872 [2024-07-15 02:07:23.338657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:23.872 Passthru0 00:05:23.872 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.872 02:07:23 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:23.872 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.872 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:23.872 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.872 02:07:23 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:23.872 { 00:05:23.872 "aliases": [ 00:05:23.872 "86d81b71-7ea1-45fc-8ed5-426c08f85ba7" 00:05:23.872 ], 00:05:23.872 "assigned_rate_limits": { 00:05:23.872 "r_mbytes_per_sec": 0, 00:05:23.872 "rw_ios_per_sec": 0, 00:05:23.872 "rw_mbytes_per_sec": 0, 00:05:23.872 "w_mbytes_per_sec": 0 00:05:23.872 }, 00:05:23.872 "block_size": 512, 00:05:23.872 "claim_type": "exclusive_write", 00:05:23.872 "claimed": true, 00:05:23.872 "driver_specific": {}, 00:05:23.872 "memory_domains": [ 00:05:23.872 { 00:05:23.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.872 "dma_device_type": 2 00:05:23.872 } 00:05:23.872 ], 00:05:23.872 "name": "Malloc0", 00:05:23.872 "num_blocks": 16384, 
00:05:23.872 "product_name": "Malloc disk", 00:05:23.872 "supported_io_types": { 00:05:23.872 "abort": true, 00:05:23.872 "compare": false, 00:05:23.872 "compare_and_write": false, 00:05:23.872 "flush": true, 00:05:23.872 "nvme_admin": false, 00:05:23.872 "nvme_io": false, 00:05:23.872 "read": true, 00:05:23.872 "reset": true, 00:05:23.872 "unmap": true, 00:05:23.872 "write": true, 00:05:23.872 "write_zeroes": true 00:05:23.872 }, 00:05:23.872 "uuid": "86d81b71-7ea1-45fc-8ed5-426c08f85ba7", 00:05:23.872 "zoned": false 00:05:23.872 }, 00:05:23.872 { 00:05:23.872 "aliases": [ 00:05:23.872 "89a7d517-4a49-5d0d-93f9-d3c61ff35e8a" 00:05:23.872 ], 00:05:23.872 "assigned_rate_limits": { 00:05:23.872 "r_mbytes_per_sec": 0, 00:05:23.872 "rw_ios_per_sec": 0, 00:05:23.872 "rw_mbytes_per_sec": 0, 00:05:23.872 "w_mbytes_per_sec": 0 00:05:23.872 }, 00:05:23.872 "block_size": 512, 00:05:23.872 "claimed": false, 00:05:23.872 "driver_specific": { 00:05:23.872 "passthru": { 00:05:23.872 "base_bdev_name": "Malloc0", 00:05:23.872 "name": "Passthru0" 00:05:23.872 } 00:05:23.872 }, 00:05:23.872 "memory_domains": [ 00:05:23.872 { 00:05:23.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.872 "dma_device_type": 2 00:05:23.872 } 00:05:23.872 ], 00:05:23.872 "name": "Passthru0", 00:05:23.872 "num_blocks": 16384, 00:05:23.872 "product_name": "passthru", 00:05:23.872 "supported_io_types": { 00:05:23.872 "abort": true, 00:05:23.872 "compare": false, 00:05:23.872 "compare_and_write": false, 00:05:23.872 "flush": true, 00:05:23.872 "nvme_admin": false, 00:05:23.872 "nvme_io": false, 00:05:23.872 "read": true, 00:05:23.872 "reset": true, 00:05:23.872 "unmap": true, 00:05:23.872 "write": true, 00:05:23.872 "write_zeroes": true 00:05:23.872 }, 00:05:23.873 "uuid": "89a7d517-4a49-5d0d-93f9-d3c61ff35e8a", 00:05:23.873 "zoned": false 00:05:23.873 } 00:05:23.873 ]' 00:05:23.873 02:07:23 -- rpc/rpc.sh@21 -- # jq length 00:05:23.873 02:07:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:23.873 02:07:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:23.873 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.873 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.135 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.135 02:07:23 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:24.135 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.135 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.135 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.135 02:07:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.135 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.135 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.135 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.135 02:07:23 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.135 02:07:23 -- rpc/rpc.sh@26 -- # jq length 00:05:24.135 02:07:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.135 00:05:24.135 real 0m0.338s 00:05:24.135 user 0m0.219s 00:05:24.135 sys 0m0.038s 00:05:24.135 02:07:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.135 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.135 ************************************ 00:05:24.135 END TEST rpc_integrity 00:05:24.135 ************************************ 00:05:24.135 02:07:23 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:24.135 02:07:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.135 
02:07:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.135 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.135 ************************************ 00:05:24.135 START TEST rpc_plugins 00:05:24.135 ************************************ 00:05:24.135 02:07:23 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:24.135 02:07:23 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:24.135 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.135 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.135 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.135 02:07:23 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:24.135 02:07:23 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:24.135 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.135 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.135 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.135 02:07:23 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:24.135 { 00:05:24.135 "aliases": [ 00:05:24.135 "bd71f88c-c480-49f9-821e-3c1538b83b2b" 00:05:24.135 ], 00:05:24.135 "assigned_rate_limits": { 00:05:24.135 "r_mbytes_per_sec": 0, 00:05:24.135 "rw_ios_per_sec": 0, 00:05:24.135 "rw_mbytes_per_sec": 0, 00:05:24.135 "w_mbytes_per_sec": 0 00:05:24.135 }, 00:05:24.135 "block_size": 4096, 00:05:24.135 "claimed": false, 00:05:24.135 "driver_specific": {}, 00:05:24.135 "memory_domains": [ 00:05:24.135 { 00:05:24.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.135 "dma_device_type": 2 00:05:24.135 } 00:05:24.135 ], 00:05:24.135 "name": "Malloc1", 00:05:24.135 "num_blocks": 256, 00:05:24.135 "product_name": "Malloc disk", 00:05:24.135 "supported_io_types": { 00:05:24.135 "abort": true, 00:05:24.135 "compare": false, 00:05:24.135 "compare_and_write": false, 00:05:24.135 "flush": true, 00:05:24.135 "nvme_admin": false, 00:05:24.135 "nvme_io": false, 00:05:24.136 "read": true, 00:05:24.136 "reset": true, 00:05:24.136 "unmap": true, 00:05:24.136 "write": true, 00:05:24.136 "write_zeroes": true 00:05:24.136 }, 00:05:24.136 "uuid": "bd71f88c-c480-49f9-821e-3c1538b83b2b", 00:05:24.136 "zoned": false 00:05:24.136 } 00:05:24.136 ]' 00:05:24.136 02:07:23 -- rpc/rpc.sh@32 -- # jq length 00:05:24.136 02:07:23 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:24.136 02:07:23 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:24.136 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.136 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.136 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.136 02:07:23 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:24.136 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.136 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.136 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.136 02:07:23 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:24.136 02:07:23 -- rpc/rpc.sh@36 -- # jq length 00:05:24.394 02:07:23 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:24.394 00:05:24.394 real 0m0.169s 00:05:24.395 user 0m0.117s 00:05:24.395 sys 0m0.014s 00:05:24.395 02:07:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.395 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.395 ************************************ 00:05:24.395 END TEST rpc_plugins 00:05:24.395 ************************************ 00:05:24.395 02:07:23 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
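Both rpc_integrity and rpc_plugins above verify bdev creation and deletion by piping bdev_get_bdevs through jq length. The same check can be run standalone against the live target with SPDK's stock rpc.py client (path per this run's repo layout); it should print 0 once every test bdev has been deleted:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq length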
00:05:24.395 02:07:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.395 02:07:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.395 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.395 ************************************ 00:05:24.395 START TEST rpc_trace_cmd_test 00:05:24.395 ************************************ 00:05:24.395 02:07:23 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:24.395 02:07:23 -- rpc/rpc.sh@40 -- # local info 00:05:24.395 02:07:23 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:24.395 02:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.395 02:07:23 -- common/autotest_common.sh@10 -- # set +x 00:05:24.395 02:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.395 02:07:23 -- rpc/rpc.sh@42 -- # info='{ 00:05:24.395 "bdev": { 00:05:24.395 "mask": "0x8", 00:05:24.395 "tpoint_mask": "0xffffffffffffffff" 00:05:24.395 }, 00:05:24.395 "bdev_nvme": { 00:05:24.395 "mask": "0x4000", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "blobfs": { 00:05:24.395 "mask": "0x80", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "dsa": { 00:05:24.395 "mask": "0x200", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "ftl": { 00:05:24.395 "mask": "0x40", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "iaa": { 00:05:24.395 "mask": "0x1000", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "iscsi_conn": { 00:05:24.395 "mask": "0x2", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "nvme_pcie": { 00:05:24.395 "mask": "0x800", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "nvme_tcp": { 00:05:24.395 "mask": "0x2000", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "nvmf_rdma": { 00:05:24.395 "mask": "0x10", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "nvmf_tcp": { 00:05:24.395 "mask": "0x20", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "scsi": { 00:05:24.395 "mask": "0x4", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "thread": { 00:05:24.395 "mask": "0x400", 00:05:24.395 "tpoint_mask": "0x0" 00:05:24.395 }, 00:05:24.395 "tpoint_group_mask": "0x8", 00:05:24.395 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67170" 00:05:24.395 }' 00:05:24.395 02:07:23 -- rpc/rpc.sh@43 -- # jq length 00:05:24.395 02:07:23 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:24.395 02:07:23 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:24.395 02:07:23 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:24.395 02:07:23 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:24.653 02:07:23 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:24.653 02:07:23 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:24.653 02:07:24 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:24.653 02:07:24 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:24.653 02:07:24 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:24.653 00:05:24.653 real 0m0.285s 00:05:24.653 user 0m0.249s 00:05:24.653 sys 0m0.024s 00:05:24.653 02:07:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.653 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.653 ************************************ 00:05:24.653 END TEST rpc_trace_cmd_test 00:05:24.653 ************************************ 00:05:24.653 02:07:24 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:24.653 02:07:24 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:24.653 02:07:24 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:05:24.653 02:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.653 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.653 ************************************ 00:05:24.653 START TEST go_rpc 00:05:24.653 ************************************ 00:05:24.653 02:07:24 -- common/autotest_common.sh@1104 -- # go_rpc 00:05:24.653 02:07:24 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:24.653 02:07:24 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:24.653 02:07:24 -- rpc/rpc.sh@52 -- # jq length 00:05:24.653 02:07:24 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:24.653 02:07:24 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.653 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.653 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.653 02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.653 02:07:24 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:24.653 02:07:24 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:24.912 02:07:24 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["d4e4d809-e465-42d1-a626-11f551d9eb79"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"d4e4d809-e465-42d1-a626-11f551d9eb79","zoned":false}]' 00:05:24.912 02:07:24 -- rpc/rpc.sh@57 -- # jq length 00:05:24.912 02:07:24 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:24.912 02:07:24 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:24.912 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.912 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.912 02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.912 02:07:24 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:24.912 02:07:24 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:24.912 02:07:24 -- rpc/rpc.sh@61 -- # jq length 00:05:24.912 02:07:24 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:24.912 00:05:24.912 real 0m0.227s 00:05:24.912 user 0m0.158s 00:05:24.912 sys 0m0.033s 00:05:24.912 02:07:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.912 ************************************ 00:05:24.912 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.912 END TEST go_rpc 00:05:24.912 ************************************ 00:05:24.912 02:07:24 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:24.912 02:07:24 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:24.912 02:07:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.912 02:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.912 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.912 ************************************ 00:05:24.912 START TEST rpc_daemon_integrity 00:05:24.912 ************************************ 00:05:24.912 02:07:24 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:24.912 02:07:24 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.912 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.912 02:07:24 -- 
common/autotest_common.sh@10 -- # set +x 00:05:24.912 02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.912 02:07:24 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:24.912 02:07:24 -- rpc/rpc.sh@13 -- # jq length 00:05:24.912 02:07:24 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.912 02:07:24 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.912 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.912 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.169 02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.169 02:07:24 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:25.169 02:07:24 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.169 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.169 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.169 02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.169 02:07:24 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.169 { 00:05:25.169 "aliases": [ 00:05:25.169 "50add631-bdf1-416f-9dc7-57bdb363cac2" 00:05:25.169 ], 00:05:25.169 "assigned_rate_limits": { 00:05:25.169 "r_mbytes_per_sec": 0, 00:05:25.169 "rw_ios_per_sec": 0, 00:05:25.169 "rw_mbytes_per_sec": 0, 00:05:25.169 "w_mbytes_per_sec": 0 00:05:25.169 }, 00:05:25.169 "block_size": 512, 00:05:25.169 "claimed": false, 00:05:25.169 "driver_specific": {}, 00:05:25.169 "memory_domains": [ 00:05:25.169 { 00:05:25.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.169 "dma_device_type": 2 00:05:25.169 } 00:05:25.169 ], 00:05:25.169 "name": "Malloc3", 00:05:25.169 "num_blocks": 16384, 00:05:25.169 "product_name": "Malloc disk", 00:05:25.169 "supported_io_types": { 00:05:25.169 "abort": true, 00:05:25.169 "compare": false, 00:05:25.169 "compare_and_write": false, 00:05:25.169 "flush": true, 00:05:25.169 "nvme_admin": false, 00:05:25.169 "nvme_io": false, 00:05:25.169 "read": true, 00:05:25.169 "reset": true, 00:05:25.169 "unmap": true, 00:05:25.169 "write": true, 00:05:25.169 "write_zeroes": true 00:05:25.169 }, 00:05:25.169 "uuid": "50add631-bdf1-416f-9dc7-57bdb363cac2", 00:05:25.169 "zoned": false 00:05:25.169 } 00:05:25.169 ]' 00:05:25.169 02:07:24 -- rpc/rpc.sh@17 -- # jq length 00:05:25.169 02:07:24 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.169 02:07:24 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:25.169 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.169 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.169 [2024-07-15 02:07:24.546156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:25.169 [2024-07-15 02:07:24.546234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.169 [2024-07-15 02:07:24.546251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ebd250 00:05:25.169 [2024-07-15 02:07:24.546260] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.169 [2024-07-15 02:07:24.547716] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.169 [2024-07-15 02:07:24.547765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.169 Passthru0 00:05:25.169 02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.169 02:07:24 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.169 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.169 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.169 
02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.169 02:07:24 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.169 { 00:05:25.169 "aliases": [ 00:05:25.169 "50add631-bdf1-416f-9dc7-57bdb363cac2" 00:05:25.169 ], 00:05:25.169 "assigned_rate_limits": { 00:05:25.169 "r_mbytes_per_sec": 0, 00:05:25.169 "rw_ios_per_sec": 0, 00:05:25.169 "rw_mbytes_per_sec": 0, 00:05:25.169 "w_mbytes_per_sec": 0 00:05:25.169 }, 00:05:25.169 "block_size": 512, 00:05:25.169 "claim_type": "exclusive_write", 00:05:25.169 "claimed": true, 00:05:25.169 "driver_specific": {}, 00:05:25.169 "memory_domains": [ 00:05:25.169 { 00:05:25.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.169 "dma_device_type": 2 00:05:25.169 } 00:05:25.169 ], 00:05:25.169 "name": "Malloc3", 00:05:25.169 "num_blocks": 16384, 00:05:25.169 "product_name": "Malloc disk", 00:05:25.169 "supported_io_types": { 00:05:25.169 "abort": true, 00:05:25.169 "compare": false, 00:05:25.169 "compare_and_write": false, 00:05:25.169 "flush": true, 00:05:25.169 "nvme_admin": false, 00:05:25.169 "nvme_io": false, 00:05:25.169 "read": true, 00:05:25.170 "reset": true, 00:05:25.170 "unmap": true, 00:05:25.170 "write": true, 00:05:25.170 "write_zeroes": true 00:05:25.170 }, 00:05:25.170 "uuid": "50add631-bdf1-416f-9dc7-57bdb363cac2", 00:05:25.170 "zoned": false 00:05:25.170 }, 00:05:25.170 { 00:05:25.170 "aliases": [ 00:05:25.170 "32df8444-32b9-5e1d-b69a-432017a7dd09" 00:05:25.170 ], 00:05:25.170 "assigned_rate_limits": { 00:05:25.170 "r_mbytes_per_sec": 0, 00:05:25.170 "rw_ios_per_sec": 0, 00:05:25.170 "rw_mbytes_per_sec": 0, 00:05:25.170 "w_mbytes_per_sec": 0 00:05:25.170 }, 00:05:25.170 "block_size": 512, 00:05:25.170 "claimed": false, 00:05:25.170 "driver_specific": { 00:05:25.170 "passthru": { 00:05:25.170 "base_bdev_name": "Malloc3", 00:05:25.170 "name": "Passthru0" 00:05:25.170 } 00:05:25.170 }, 00:05:25.170 "memory_domains": [ 00:05:25.170 { 00:05:25.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.170 "dma_device_type": 2 00:05:25.170 } 00:05:25.170 ], 00:05:25.170 "name": "Passthru0", 00:05:25.170 "num_blocks": 16384, 00:05:25.170 "product_name": "passthru", 00:05:25.170 "supported_io_types": { 00:05:25.170 "abort": true, 00:05:25.170 "compare": false, 00:05:25.170 "compare_and_write": false, 00:05:25.170 "flush": true, 00:05:25.170 "nvme_admin": false, 00:05:25.170 "nvme_io": false, 00:05:25.170 "read": true, 00:05:25.170 "reset": true, 00:05:25.170 "unmap": true, 00:05:25.170 "write": true, 00:05:25.170 "write_zeroes": true 00:05:25.170 }, 00:05:25.170 "uuid": "32df8444-32b9-5e1d-b69a-432017a7dd09", 00:05:25.170 "zoned": false 00:05:25.170 } 00:05:25.170 ]' 00:05:25.170 02:07:24 -- rpc/rpc.sh@21 -- # jq length 00:05:25.170 02:07:24 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.170 02:07:24 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.170 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.170 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.170 02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.170 02:07:24 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:25.170 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.170 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.170 02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.170 02:07:24 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.170 02:07:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.170 02:07:24 -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.170 02:07:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.170 02:07:24 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.170 02:07:24 -- rpc/rpc.sh@26 -- # jq length 00:05:25.170 02:07:24 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:25.170 00:05:25.170 real 0m0.319s 00:05:25.170 user 0m0.216s 00:05:25.170 sys 0m0.035s 00:05:25.170 02:07:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.170 02:07:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.170 ************************************ 00:05:25.170 END TEST rpc_daemon_integrity 00:05:25.170 ************************************ 00:05:25.427 02:07:24 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:25.427 02:07:24 -- rpc/rpc.sh@84 -- # killprocess 67170 00:05:25.427 02:07:24 -- common/autotest_common.sh@926 -- # '[' -z 67170 ']' 00:05:25.427 02:07:24 -- common/autotest_common.sh@930 -- # kill -0 67170 00:05:25.427 02:07:24 -- common/autotest_common.sh@931 -- # uname 00:05:25.427 02:07:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:25.427 02:07:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67170 00:05:25.427 02:07:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:25.427 02:07:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:25.427 killing process with pid 67170 00:05:25.427 02:07:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67170' 00:05:25.427 02:07:24 -- common/autotest_common.sh@945 -- # kill 67170 00:05:25.427 02:07:24 -- common/autotest_common.sh@950 -- # wait 67170 00:05:25.685 00:05:25.685 real 0m3.113s 00:05:25.685 user 0m4.140s 00:05:25.685 sys 0m0.746s 00:05:25.685 02:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.685 02:07:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.685 ************************************ 00:05:25.685 END TEST rpc 00:05:25.685 ************************************ 00:05:25.685 02:07:25 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:25.685 02:07:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.685 02:07:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.685 02:07:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.685 ************************************ 00:05:25.685 START TEST rpc_client 00:05:25.685 ************************************ 00:05:25.685 02:07:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:25.945 * Looking for test storage... 
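The rpc_daemon_integrity block above drives a full create/verify/delete cycle over the RPC socket: malloc bdev, passthru bdev stacked on it, jq length checks at 1 and 2, then teardown back to 0. A minimal standalone sketch of the same sequence, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    malloc=$($rpc bdev_malloc_create 8 512)       # prints the new bdev name (Malloc3 in the trace)
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0   # base bdev becomes claimed ("claim_type": "exclusive_write")
    $rpc bdev_get_bdevs | jq length               # expect 2: base malloc + passthru
    $rpc bdev_passthru_delete Passthru0           # tear down in reverse order: passthru first
    $rpc bdev_malloc_delete "$malloc"
    $rpc bdev_get_bdevs | jq length               # expect 0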
00:05:25.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:25.945 02:07:25 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:25.945 OK 00:05:25.945 02:07:25 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:25.945 ************************************ 00:05:25.945 END TEST rpc_client 00:05:25.945 ************************************ 00:05:25.945 00:05:25.945 real 0m0.098s 00:05:25.945 user 0m0.051s 00:05:25.945 sys 0m0.052s 00:05:25.945 02:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.945 02:07:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.945 02:07:25 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:25.945 02:07:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.945 02:07:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.945 02:07:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.945 ************************************ 00:05:25.945 START TEST json_config 00:05:25.945 ************************************ 00:05:25.945 02:07:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:25.945 02:07:25 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:25.945 02:07:25 -- nvmf/common.sh@7 -- # uname -s 00:05:25.945 02:07:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.945 02:07:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.945 02:07:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.945 02:07:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.945 02:07:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.945 02:07:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.945 02:07:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.945 02:07:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.945 02:07:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.945 02:07:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.945 02:07:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:05:25.945 02:07:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:05:25.945 02:07:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.945 02:07:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.945 02:07:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.945 02:07:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:25.945 02:07:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.945 02:07:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.945 02:07:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.945 02:07:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.945 02:07:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.945 02:07:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.945 02:07:25 -- paths/export.sh@5 -- # export PATH 00:05:25.945 02:07:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.945 02:07:25 -- nvmf/common.sh@46 -- # : 0 00:05:25.945 02:07:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:25.945 02:07:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:25.945 02:07:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:25.945 02:07:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.945 02:07:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.945 02:07:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:25.945 02:07:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:25.945 02:07:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:25.945 02:07:25 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:25.945 02:07:25 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:25.945 02:07:25 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:25.945 02:07:25 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:25.945 02:07:25 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:25.945 02:07:25 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:25.945 02:07:25 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:25.945 02:07:25 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:25.945 02:07:25 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:25.945 02:07:25 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:25.945 02:07:25 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:25.945 02:07:25 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:25.945 02:07:25 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:25.945 02:07:25 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.945 INFO: JSON configuration test init 
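nvmf/common.sh, sourced above, takes its host identity from nvme-cli: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID is the trailing UUID of that NQN. One way to reproduce the same two values by hand, assuming nvme-cli is installed (the parameter expansion is an illustration, not necessarily how common.sh derives it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:97a9fd12-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the UUID after the last ':'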
00:05:25.945 02:07:25 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:25.945 02:07:25 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:25.945 02:07:25 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:25.945 02:07:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:25.945 02:07:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.945 02:07:25 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:25.945 02:07:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:25.945 02:07:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.945 02:07:25 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:25.946 02:07:25 -- json_config/json_config.sh@98 -- # local app=target 00:05:25.946 02:07:25 -- json_config/json_config.sh@99 -- # shift 00:05:25.946 02:07:25 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:25.946 02:07:25 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:25.946 02:07:25 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:25.946 02:07:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:25.946 02:07:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:25.946 02:07:25 -- json_config/json_config.sh@111 -- # app_pid[$app]=67476 00:05:25.946 Waiting for target to run... 00:05:25.946 02:07:25 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:25.946 02:07:25 -- json_config/json_config.sh@114 -- # waitforlisten 67476 /var/tmp/spdk_tgt.sock 00:05:25.946 02:07:25 -- common/autotest_common.sh@819 -- # '[' -z 67476 ']' 00:05:25.946 02:07:25 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:25.946 02:07:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.946 02:07:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.946 02:07:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.946 02:07:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.946 02:07:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.946 [2024-07-15 02:07:25.481416] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
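json_config_test_start_app launches the target with --wait-for-rpc, so the app holds off subsystem initialization until told to proceed, and waitforlisten polls the socket until RPCs answer. A hedged sketch of that launch pattern, using the binary and socket paths from the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    pid=$!
    # poll until the RPC server answers on the UNIX socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # the target now sits in pre-init until framework_start_init (or a load_config) is sent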
00:05:25.946 [2024-07-15 02:07:25.481530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67476 ] 00:05:26.513 [2024-07-15 02:07:25.916918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.513 [2024-07-15 02:07:25.974036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.513 [2024-07-15 02:07:25.974243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.086 02:07:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:27.086 02:07:26 -- common/autotest_common.sh@852 -- # return 0 00:05:27.086 00:05:27.086 02:07:26 -- json_config/json_config.sh@115 -- # echo '' 00:05:27.086 02:07:26 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:27.086 02:07:26 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:27.086 02:07:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.086 02:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:27.086 02:07:26 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:27.086 02:07:26 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:27.086 02:07:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:27.086 02:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:27.086 02:07:26 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:27.086 02:07:26 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:27.086 02:07:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:27.664 02:07:26 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:27.664 02:07:26 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:27.664 02:07:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.664 02:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:27.664 02:07:26 -- json_config/json_config.sh@48 -- # local ret=0 00:05:27.664 02:07:26 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:27.664 02:07:26 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:27.664 02:07:26 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:27.664 02:07:26 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:27.664 02:07:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:27.923 02:07:27 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:27.923 02:07:27 -- json_config/json_config.sh@51 -- # local get_types 00:05:27.923 02:07:27 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:27.923 02:07:27 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:27.923 02:07:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:27.923 02:07:27 -- common/autotest_common.sh@10 -- # set +x 00:05:27.923 02:07:27 -- json_config/json_config.sh@58 -- # return 0 00:05:27.923 02:07:27 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:27.923 02:07:27 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
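tgt_check_notification_types, traced above, compares the notification types the target advertises against the expected bdev_register/bdev_unregister pair. The same check by hand, assuming the socket from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    got=$($rpc -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]' | xargs)
    [ "$got" = 'bdev_register bdev_unregister' ] && echo OK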
00:05:27.923 02:07:27 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:27.923 02:07:27 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:27.923 02:07:27 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:27.923 02:07:27 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:27.923 02:07:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:27.923 02:07:27 -- common/autotest_common.sh@10 -- # set +x 00:05:27.923 02:07:27 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:27.923 02:07:27 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:27.923 02:07:27 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:27.923 02:07:27 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:27.923 02:07:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.182 MallocForNvmf0 00:05:28.182 02:07:27 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.182 02:07:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:28.440 MallocForNvmf1 00:05:28.440 02:07:27 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.440 02:07:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:28.698 [2024-07-15 02:07:28.021202] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.698 02:07:28 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.698 02:07:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:28.957 02:07:28 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:28.957 02:07:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.215 02:07:28 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.215 02:07:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.472 02:07:28 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.472 02:07:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.472 [2024-07-15 02:07:28.985864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.472 02:07:29 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:29.472 02:07:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:29.472 02:07:29 -- common/autotest_common.sh@10 -- # set +x 00:05:29.730 02:07:29 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:29.730 02:07:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:29.730 02:07:29 -- common/autotest_common.sh@10 -- # set +x 00:05:29.730 02:07:29 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:29.730 02:07:29 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:29.730 02:07:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:29.988 MallocBdevForConfigChangeCheck 00:05:29.988 02:07:29 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:29.988 02:07:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:29.988 02:07:29 -- common/autotest_common.sh@10 -- # set +x 00:05:29.988 02:07:29 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:29.988 02:07:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.245 INFO: shutting down applications... 00:05:30.245 02:07:29 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:30.245 02:07:29 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:30.245 02:07:29 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:30.245 02:07:29 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:30.245 02:07:29 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:30.809 Calling clear_iscsi_subsystem 00:05:30.809 Calling clear_nvmf_subsystem 00:05:30.809 Calling clear_nbd_subsystem 00:05:30.809 Calling clear_ublk_subsystem 00:05:30.809 Calling clear_vhost_blk_subsystem 00:05:30.809 Calling clear_vhost_scsi_subsystem 00:05:30.809 Calling clear_scheduler_subsystem 00:05:30.809 Calling clear_bdev_subsystem 00:05:30.809 Calling clear_accel_subsystem 00:05:30.809 Calling clear_vmd_subsystem 00:05:30.809 Calling clear_sock_subsystem 00:05:30.809 Calling clear_iobuf_subsystem 00:05:30.809 02:07:30 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:30.809 02:07:30 -- json_config/json_config.sh@396 -- # count=100 00:05:30.809 02:07:30 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:30.809 02:07:30 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.809 02:07:30 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:30.809 02:07:30 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:31.065 02:07:30 -- json_config/json_config.sh@398 -- # break 00:05:31.065 02:07:30 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:31.065 02:07:30 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:31.065 02:07:30 -- json_config/json_config.sh@120 -- # local app=target 00:05:31.065 02:07:30 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:31.065 02:07:30 -- json_config/json_config.sh@124 -- # [[ -n 67476 ]] 00:05:31.065 02:07:30 -- json_config/json_config.sh@127 -- # kill -SIGINT 67476 00:05:31.065 02:07:30 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
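The create_nvmf_subsystem_config steps traced above build out the TCP target in order: backing malloc bdevs, transport, subsystem, namespaces, listener. Replayed by hand against the same socket (all commands as they appear in the trace):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420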
00:05:31.065 02:07:30 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:31.065 02:07:30 -- json_config/json_config.sh@130 -- # kill -0 67476 00:05:31.065 02:07:30 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:31.628 02:07:31 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:31.628 02:07:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:31.628 02:07:31 -- json_config/json_config.sh@130 -- # kill -0 67476 00:05:31.628 02:07:31 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:31.628 02:07:31 -- json_config/json_config.sh@132 -- # break 00:05:31.628 SPDK target shutdown done 00:05:31.628 02:07:31 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:31.628 02:07:31 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:31.628 02:07:31 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:31.628 INFO: relaunching applications... 00:05:31.628 02:07:31 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.628 02:07:31 -- json_config/json_config.sh@98 -- # local app=target 00:05:31.628 02:07:31 -- json_config/json_config.sh@99 -- # shift 00:05:31.628 02:07:31 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:31.628 02:07:31 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:31.628 02:07:31 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:31.628 02:07:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.628 02:07:31 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:31.628 02:07:31 -- json_config/json_config.sh@111 -- # app_pid[$app]=67745 00:05:31.628 02:07:31 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.628 02:07:31 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:31.628 Waiting for target to run... 00:05:31.628 02:07:31 -- json_config/json_config.sh@114 -- # waitforlisten 67745 /var/tmp/spdk_tgt.sock 00:05:31.628 02:07:31 -- common/autotest_common.sh@819 -- # '[' -z 67745 ']' 00:05:31.628 02:07:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.628 02:07:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.628 02:07:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.628 02:07:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.628 02:07:31 -- common/autotest_common.sh@10 -- # set +x 00:05:31.628 [2024-07-15 02:07:31.081568] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
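The relaunch just started reuses the configuration captured with save_config before shutdown, so the second boot reproduces the first target's state without replaying individual RPCs. The round-trip in sketch form, with the paths from the trace:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    # ...stop the old target, then boot straight from the snapshot:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json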
00:05:31.628 [2024-07-15 02:07:31.081694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67745 ] 00:05:32.191 [2024-07-15 02:07:31.502332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.191 [2024-07-15 02:07:31.557870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.191 [2024-07-15 02:07:31.558062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.449 [2024-07-15 02:07:31.854186] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.449 [2024-07-15 02:07:31.886281] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:32.707 02:07:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.707 02:07:32 -- common/autotest_common.sh@852 -- # return 0 00:05:32.707 00:05:32.707 02:07:32 -- json_config/json_config.sh@115 -- # echo '' 00:05:32.707 02:07:32 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:32.707 02:07:32 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:32.707 INFO: Checking if target configuration is the same... 00:05:32.707 02:07:32 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.707 02:07:32 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:32.707 02:07:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.707 + '[' 2 -ne 2 ']' 00:05:32.707 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:32.707 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:32.707 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:32.707 +++ basename /dev/fd/62 00:05:32.707 ++ mktemp /tmp/62.XXX 00:05:32.707 + tmp_file_1=/tmp/62.TZt 00:05:32.707 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.707 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.707 + tmp_file_2=/tmp/spdk_tgt_config.json.yDC 00:05:32.707 + ret=0 00:05:32.707 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.965 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.965 + diff -u /tmp/62.TZt /tmp/spdk_tgt_config.json.yDC 00:05:32.965 + echo 'INFO: JSON config files are the same' 00:05:32.965 INFO: JSON config files are the same 00:05:32.965 + rm /tmp/62.TZt /tmp/spdk_tgt_config.json.yDC 00:05:32.965 + exit 0 00:05:32.965 02:07:32 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:32.965 INFO: changing configuration and checking if this can be detected... 00:05:32.965 02:07:32 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
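json_diff.sh, whose '+'-prefixed trace appears above, runs both JSON documents through config_filter.py -method sort before diffing, so key order cannot cause a false mismatch; exit 0 means identical. An approximate stand-in using jq's recursive key sort on the same temp files (looser than the repo's sorter, since jq -S normalizes object keys but not array order):

    diff <(jq -S . /tmp/62.TZt) <(jq -S . /tmp/spdk_tgt_config.json.yDC) \
        && echo 'INFO: JSON config files are the same'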
00:05:32.965 02:07:32 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.965 02:07:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:33.222 02:07:32 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:33.222 02:07:32 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:33.222 02:07:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.222 + '[' 2 -ne 2 ']' 00:05:33.222 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:33.222 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:33.222 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:33.222 +++ basename /dev/fd/62 00:05:33.222 ++ mktemp /tmp/62.XXX 00:05:33.222 + tmp_file_1=/tmp/62.Xt2 00:05:33.222 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:33.222 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:33.222 + tmp_file_2=/tmp/spdk_tgt_config.json.Uwg 00:05:33.222 + ret=0 00:05:33.222 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:33.788 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:33.788 + diff -u /tmp/62.Xt2 /tmp/spdk_tgt_config.json.Uwg 00:05:33.788 + ret=1 00:05:33.788 + echo '=== Start of file: /tmp/62.Xt2 ===' 00:05:33.788 + cat /tmp/62.Xt2 00:05:33.788 + echo '=== End of file: /tmp/62.Xt2 ===' 00:05:33.788 + echo '' 00:05:33.788 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Uwg ===' 00:05:33.788 + cat /tmp/spdk_tgt_config.json.Uwg 00:05:33.788 + echo '=== End of file: /tmp/spdk_tgt_config.json.Uwg ===' 00:05:33.788 + echo '' 00:05:33.788 + rm /tmp/62.Xt2 /tmp/spdk_tgt_config.json.Uwg 00:05:33.788 + exit 1 00:05:33.788 INFO: configuration change detected. 00:05:33.788 02:07:33 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
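Change detection, traced next, works by deleting the sentinel bdev MallocBdevForConfigChangeCheck and re-running the diff; a non-zero diff status is the pass condition. A sketch, with before.json and after.json as hypothetical snapshot names:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc save_config > before.json
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc save_config > after.json
    diff -u before.json after.json >/dev/null || echo 'INFO: configuration change detected.'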
00:05:33.788 02:07:33 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:33.788 02:07:33 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:33.788 02:07:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:33.788 02:07:33 -- common/autotest_common.sh@10 -- # set +x 00:05:33.788 02:07:33 -- json_config/json_config.sh@360 -- # local ret=0 00:05:33.788 02:07:33 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:33.788 02:07:33 -- json_config/json_config.sh@370 -- # [[ -n 67745 ]] 00:05:33.788 02:07:33 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:33.788 02:07:33 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:33.788 02:07:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:33.788 02:07:33 -- common/autotest_common.sh@10 -- # set +x 00:05:33.788 02:07:33 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:33.788 02:07:33 -- json_config/json_config.sh@246 -- # uname -s 00:05:33.788 02:07:33 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:33.788 02:07:33 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:33.788 02:07:33 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:33.788 02:07:33 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:33.788 02:07:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.788 02:07:33 -- common/autotest_common.sh@10 -- # set +x 00:05:33.788 02:07:33 -- json_config/json_config.sh@376 -- # killprocess 67745 00:05:33.788 02:07:33 -- common/autotest_common.sh@926 -- # '[' -z 67745 ']' 00:05:33.788 02:07:33 -- common/autotest_common.sh@930 -- # kill -0 67745 00:05:33.788 02:07:33 -- common/autotest_common.sh@931 -- # uname 00:05:33.788 02:07:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:33.788 02:07:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67745 00:05:33.788 02:07:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:33.788 02:07:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:33.788 killing process with pid 67745 00:05:33.788 02:07:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67745' 00:05:33.788 02:07:33 -- common/autotest_common.sh@945 -- # kill 67745 00:05:33.788 02:07:33 -- common/autotest_common.sh@950 -- # wait 67745 00:05:34.046 02:07:33 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:34.046 02:07:33 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:34.046 02:07:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:34.046 02:07:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.046 02:07:33 -- json_config/json_config.sh@381 -- # return 0 00:05:34.046 INFO: Success 00:05:34.046 02:07:33 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:34.046 ************************************ 00:05:34.046 END TEST json_config 00:05:34.046 ************************************ 00:05:34.046 00:05:34.046 real 0m8.157s 00:05:34.046 user 0m11.605s 00:05:34.046 sys 0m1.819s 00:05:34.046 02:07:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.046 02:07:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.046 02:07:33 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.046 
02:07:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.046 02:07:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.046 02:07:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.046 ************************************ 00:05:34.046 START TEST json_config_extra_key 00:05:34.046 ************************************ 00:05:34.046 02:07:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.046 02:07:33 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.046 02:07:33 -- nvmf/common.sh@7 -- # uname -s 00:05:34.046 02:07:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.046 02:07:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.046 02:07:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.046 02:07:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.046 02:07:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.046 02:07:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.046 02:07:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.046 02:07:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.046 02:07:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.046 02:07:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.305 02:07:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:05:34.305 02:07:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:05:34.305 02:07:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.305 02:07:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.305 02:07:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.305 02:07:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.305 02:07:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.305 02:07:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.305 02:07:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.306 02:07:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.306 02:07:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.306 02:07:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:34.306 02:07:33 -- paths/export.sh@5 -- # export PATH 00:05:34.306 02:07:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.306 02:07:33 -- nvmf/common.sh@46 -- # : 0 00:05:34.306 02:07:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:34.306 02:07:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:34.306 02:07:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:34.306 02:07:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.306 02:07:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.306 02:07:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:34.306 02:07:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:34.306 02:07:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.306 INFO: launching applications... 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=67920 00:05:34.306 Waiting for target to run... 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
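The extra_key run boots the target from a static JSON file (--json .../extra_key.json) instead of live RPCs. An illustrative hand-written config in the same subsystems/config/method format — not the repo's actual extra_key.json:

    cat > /tmp/minimal_config.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 } }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /tmp/minimal_config.json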
00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 67920 /var/tmp/spdk_tgt.sock 00:05:34.306 02:07:33 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.306 02:07:33 -- common/autotest_common.sh@819 -- # '[' -z 67920 ']' 00:05:34.306 02:07:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.306 02:07:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.306 02:07:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.306 02:07:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.306 02:07:33 -- common/autotest_common.sh@10 -- # set +x 00:05:34.306 [2024-07-15 02:07:33.676901] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:34.306 [2024-07-15 02:07:33.677012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67920 ] 00:05:34.564 [2024-07-15 02:07:34.101159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.823 [2024-07-15 02:07:34.153813] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.823 [2024-07-15 02:07:34.153952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.389 02:07:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.389 00:05:35.389 02:07:34 -- common/autotest_common.sh@852 -- # return 0 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:35.389 INFO: shutting down applications... 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 67920 ]] 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 67920 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@50 -- # kill -0 67920 00:05:35.389 02:07:34 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:35.647 02:07:35 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:35.647 02:07:35 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:35.647 02:07:35 -- json_config/json_config_extra_key.sh@50 -- # kill -0 67920 00:05:35.647 02:07:35 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:35.647 02:07:35 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:35.647 02:07:35 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:35.647 SPDK target shutdown done 00:05:35.647 02:07:35 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:35.647 Success 00:05:35.647 02:07:35 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:35.647 00:05:35.647 real 0m1.619s 00:05:35.647 user 0m1.477s 00:05:35.647 sys 0m0.446s 00:05:35.647 02:07:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.647 ************************************ 00:05:35.648 END TEST json_config_extra_key 00:05:35.648 02:07:35 -- common/autotest_common.sh@10 -- # set +x 00:05:35.648 ************************************ 00:05:35.648 02:07:35 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.648 02:07:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.648 02:07:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.648 02:07:35 -- common/autotest_common.sh@10 -- # set +x 00:05:35.906 ************************************ 00:05:35.906 START TEST alias_rpc 00:05:35.906 ************************************ 00:05:35.906 02:07:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.906 * Looking for test storage... 00:05:35.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:35.906 02:07:35 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.906 02:07:35 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=67990 00:05:35.906 02:07:35 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 67990 00:05:35.906 02:07:35 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.906 02:07:35 -- common/autotest_common.sh@819 -- # '[' -z 67990 ']' 00:05:35.906 02:07:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.906 02:07:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:35.906 02:07:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
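The shutdown wait that just completed for pid 67920 is the generic handshake both json_config suites use: send SIGINT, then poll liveness with kill -0 for up to 30 half-second intervals. In sketch form, with $pid standing for the target's pid:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done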
00:05:35.906 02:07:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:35.906 02:07:35 -- common/autotest_common.sh@10 -- # set +x 00:05:35.906 [2024-07-15 02:07:35.346254] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:35.906 [2024-07-15 02:07:35.346387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67990 ] 00:05:36.164 [2024-07-15 02:07:35.481120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.164 [2024-07-15 02:07:35.568829] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.164 [2024-07-15 02:07:35.568982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.756 02:07:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.756 02:07:36 -- common/autotest_common.sh@852 -- # return 0 00:05:36.756 02:07:36 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:37.324 02:07:36 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 67990 00:05:37.324 02:07:36 -- common/autotest_common.sh@926 -- # '[' -z 67990 ']' 00:05:37.324 02:07:36 -- common/autotest_common.sh@930 -- # kill -0 67990 00:05:37.324 02:07:36 -- common/autotest_common.sh@931 -- # uname 00:05:37.324 02:07:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:37.324 02:07:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67990 00:05:37.324 killing process with pid 67990 00:05:37.324 02:07:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:37.324 02:07:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:37.324 02:07:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67990' 00:05:37.324 02:07:36 -- common/autotest_common.sh@945 -- # kill 67990 00:05:37.324 02:07:36 -- common/autotest_common.sh@950 -- # wait 67990 00:05:37.582 ************************************ 00:05:37.582 END TEST alias_rpc 00:05:37.582 ************************************ 00:05:37.582 00:05:37.582 real 0m1.757s 00:05:37.582 user 0m1.999s 00:05:37.582 sys 0m0.438s 00:05:37.582 02:07:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.582 02:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:37.582 02:07:37 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:05:37.582 02:07:37 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.582 02:07:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.582 02:07:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.582 02:07:37 -- common/autotest_common.sh@10 -- # set +x 00:05:37.582 ************************************ 00:05:37.582 START TEST dpdk_mem_utility 00:05:37.582 ************************************ 00:05:37.582 02:07:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.582 * Looking for test storage... 00:05:37.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:37.582 02:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:37.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
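test_dpdk_mem_info.sh, starting above, asks the running target to dump its DPDK allocator state and then post-processes the dump file; the heap/mempool/memzone summary and the per-element listing that follow come from the two dpdk_mem_info.py invocations shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc env_dpdk_get_mem_stats                                  # target writes /tmp/spdk_mem_dump.txt and returns its path
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap/mempool/memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # per-element detail for heap 0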
00:05:37.582 02:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68081 00:05:37.582 02:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68081 00:05:37.582 02:07:37 -- common/autotest_common.sh@819 -- # '[' -z 68081 ']' 00:05:37.582 02:07:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.582 02:07:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.582 02:07:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.582 02:07:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.582 02:07:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.582 02:07:37 -- common/autotest_common.sh@10 -- # set +x 00:05:37.840 [2024-07-15 02:07:37.142646] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:37.840 [2024-07-15 02:07:37.142741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68081 ] 00:05:37.840 [2024-07-15 02:07:37.276418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.840 [2024-07-15 02:07:37.344590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.840 [2024-07-15 02:07:37.344991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.795 02:07:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.795 02:07:38 -- common/autotest_common.sh@852 -- # return 0 00:05:38.795 02:07:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:38.795 02:07:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:38.795 02:07:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:38.796 02:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:38.796 { 00:05:38.796 "filename": "/tmp/spdk_mem_dump.txt" 00:05:38.796 } 00:05:38.796 02:07:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:38.796 02:07:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.796 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:38.796 1 heaps totaling size 814.000000 MiB 00:05:38.796 size: 814.000000 MiB heap id: 0 00:05:38.796 end heaps---------- 00:05:38.796 8 mempools totaling size 598.116089 MiB 00:05:38.796 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:38.796 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:38.796 size: 84.521057 MiB name: bdev_io_68081 00:05:38.796 size: 51.011292 MiB name: evtpool_68081 00:05:38.796 size: 50.003479 MiB name: msgpool_68081 00:05:38.796 size: 21.763794 MiB name: PDU_Pool 00:05:38.796 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:38.796 size: 0.026123 MiB name: Session_Pool 00:05:38.796 end mempools------- 00:05:38.796 6 memzones totaling size 4.142822 MiB 00:05:38.796 size: 1.000366 MiB name: RG_ring_0_68081 00:05:38.796 size: 1.000366 MiB name: RG_ring_1_68081 00:05:38.796 size: 1.000366 MiB name: RG_ring_4_68081 00:05:38.796 size: 1.000366 MiB name: RG_ring_5_68081 00:05:38.796 size: 0.125366 MiB name: RG_ring_2_68081 00:05:38.796 size: 0.015991 MiB name: RG_ring_3_68081 00:05:38.796 end memzones------- 00:05:38.796 02:07:38 -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:38.796 heap id: 0 total size: 814.000000 MiB number of busy elements: 213 number of free elements: 15 00:05:38.796 list of free elements. size: 12.487854 MiB 00:05:38.796 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:38.796 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:38.796 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:38.796 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:38.796 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:38.796 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:38.796 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:38.796 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:38.796 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:38.796 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:05:38.796 element at address: 0x20000b200000 with size: 0.489807 MiB 00:05:38.796 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:38.796 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:38.796 element at address: 0x200027e00000 with size: 0.398499 MiB 00:05:38.796 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:38.796 list of standard malloc elements. size: 199.249573 MiB 00:05:38.796 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:38.796 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:38.796 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:38.796 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:38.796 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:38.796 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:38.796 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:38.796 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:38.796 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:38.796 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:38.796 element at address: 
0x2000002d7580 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000b27d7c0 with size: 
0.000183 MiB 00:05:38.796 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:38.796 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:38.797 
element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:38.797 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e66100 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6cd00 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:38.797 element at address: 
0x200027e6e4c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:38.797 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:38.797 list of memzone associated elements. 
size: 602.262573 MiB 00:05:38.797 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:38.797 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:38.797 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:38.797 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:38.797 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:38.797 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68081_0 00:05:38.797 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:38.797 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68081_0 00:05:38.797 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:38.797 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68081_0 00:05:38.797 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:38.797 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:38.797 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:38.797 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:38.797 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:38.797 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68081 00:05:38.797 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:38.797 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68081 00:05:38.797 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:38.797 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68081 00:05:38.797 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:38.797 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:38.797 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:38.797 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:38.797 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:38.797 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:38.797 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:38.797 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:38.797 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:38.797 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68081 00:05:38.797 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:38.797 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68081 00:05:38.797 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:38.797 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68081 00:05:38.797 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:38.797 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68081 00:05:38.797 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:38.797 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68081 00:05:38.797 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:38.797 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:38.797 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:38.797 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:38.797 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:38.797 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:38.797 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:38.797 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68081 00:05:38.797 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:38.797 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:38.797 element at address: 0x200027e661c0 with size: 0.023743 MiB 00:05:38.797 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:38.797 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:38.797 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68081 00:05:38.797 element at address: 0x200027e6c300 with size: 0.002441 MiB 00:05:38.797 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:38.797 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:38.797 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68081 00:05:38.797 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:38.797 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68081 00:05:38.797 element at address: 0x200027e6cdc0 with size: 0.000305 MiB 00:05:38.797 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:38.797 02:07:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:38.797 02:07:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68081 00:05:38.797 02:07:38 -- common/autotest_common.sh@926 -- # '[' -z 68081 ']' 00:05:38.797 02:07:38 -- common/autotest_common.sh@930 -- # kill -0 68081 00:05:38.797 02:07:38 -- common/autotest_common.sh@931 -- # uname 00:05:38.797 02:07:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:38.798 02:07:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68081 00:05:38.798 02:07:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.798 02:07:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.798 02:07:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68081' 00:05:38.798 killing process with pid 68081 00:05:38.798 02:07:38 -- common/autotest_common.sh@945 -- # kill 68081 00:05:38.798 02:07:38 -- common/autotest_common.sh@950 -- # wait 68081 00:05:39.365 00:05:39.365 real 0m1.682s 00:05:39.365 user 0m1.862s 00:05:39.365 sys 0m0.418s 00:05:39.365 ************************************ 00:05:39.365 END TEST dpdk_mem_utility 00:05:39.365 ************************************ 00:05:39.365 02:07:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.365 02:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.365 02:07:38 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:39.365 02:07:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.365 02:07:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.365 02:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.365 ************************************ 00:05:39.365 START TEST event 00:05:39.365 ************************************ 00:05:39.365 02:07:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:39.365 * Looking for test storage... 
00:05:39.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:39.365 02:07:38 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:39.365 02:07:38 -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.365 02:07:38 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.365 02:07:38 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:39.365 02:07:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.365 02:07:38 -- common/autotest_common.sh@10 -- # set +x 00:05:39.365 ************************************ 00:05:39.365 START TEST event_perf 00:05:39.365 ************************************ 00:05:39.365 02:07:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.365 Running I/O for 1 seconds...[2024-07-15 02:07:38.854257] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:39.365 [2024-07-15 02:07:38.854344] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68175 ] 00:05:39.624 [2024-07-15 02:07:38.989732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.624 [2024-07-15 02:07:39.059348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.624 [2024-07-15 02:07:39.059496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.624 [2024-07-15 02:07:39.059656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.624 [2024-07-15 02:07:39.059659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.999 Running I/O for 1 seconds... 00:05:40.999 lcore 0: 200291 00:05:40.999 lcore 1: 200291 00:05:40.999 lcore 2: 200293 00:05:40.999 lcore 3: 200291 00:05:40.999 done. 00:05:40.999 00:05:40.999 real 0m1.292s 00:05:40.999 user 0m4.113s 00:05:40.999 sys 0m0.060s 00:05:40.999 02:07:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.999 02:07:40 -- common/autotest_common.sh@10 -- # set +x 00:05:40.999 ************************************ 00:05:40.999 END TEST event_perf 00:05:40.999 ************************************ 00:05:40.999 02:07:40 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:40.999 02:07:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:40.999 02:07:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.999 02:07:40 -- common/autotest_common.sh@10 -- # set +x 00:05:40.999 ************************************ 00:05:40.999 START TEST event_reactor 00:05:40.999 ************************************ 00:05:40.999 02:07:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:40.999 [2024-07-15 02:07:40.198553] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:40.999 [2024-07-15 02:07:40.198697] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68208 ] 00:05:40.999 [2024-07-15 02:07:40.329127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.999 [2024-07-15 02:07:40.402388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.934 test_start 00:05:41.934 oneshot 00:05:41.934 tick 100 00:05:41.934 tick 100 00:05:41.934 tick 250 00:05:41.934 tick 100 00:05:41.934 tick 100 00:05:41.934 tick 250 00:05:41.934 tick 500 00:05:41.934 tick 100 00:05:41.934 tick 100 00:05:41.934 tick 100 00:05:41.934 tick 250 00:05:41.934 tick 100 00:05:41.934 tick 100 00:05:41.934 test_end 00:05:41.934 00:05:41.934 real 0m1.283s 00:05:41.934 user 0m1.129s 00:05:41.934 sys 0m0.049s 00:05:41.934 02:07:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.934 ************************************ 00:05:41.934 END TEST event_reactor 00:05:41.934 ************************************ 00:05:41.934 02:07:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.192 02:07:41 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.192 02:07:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:42.192 02:07:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.192 02:07:41 -- common/autotest_common.sh@10 -- # set +x 00:05:42.192 ************************************ 00:05:42.192 START TEST event_reactor_perf 00:05:42.192 ************************************ 00:05:42.192 02:07:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.192 [2024-07-15 02:07:41.539448] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:42.192 [2024-07-15 02:07:41.539550] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68238 ] 00:05:42.192 [2024-07-15 02:07:41.675084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.449 [2024-07-15 02:07:41.752336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.388 test_start 00:05:43.388 test_end 00:05:43.388 Performance: 419986 events per second 00:05:43.388 00:05:43.388 real 0m1.297s 00:05:43.388 user 0m1.134s 00:05:43.388 sys 0m0.058s 00:05:43.388 02:07:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.388 02:07:42 -- common/autotest_common.sh@10 -- # set +x 00:05:43.388 ************************************ 00:05:43.388 END TEST event_reactor_perf 00:05:43.388 ************************************ 00:05:43.388 02:07:42 -- event/event.sh@49 -- # uname -s 00:05:43.388 02:07:42 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:43.388 02:07:42 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:43.388 02:07:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.388 02:07:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.388 02:07:42 -- common/autotest_common.sh@10 -- # set +x 00:05:43.388 ************************************ 00:05:43.388 START TEST event_scheduler 00:05:43.388 ************************************ 00:05:43.388 02:07:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:43.646 * Looking for test storage... 00:05:43.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:43.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.646 02:07:42 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:43.646 02:07:42 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68304 00:05:43.646 02:07:42 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.646 02:07:42 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:43.646 02:07:42 -- scheduler/scheduler.sh@37 -- # waitforlisten 68304 00:05:43.646 02:07:42 -- common/autotest_common.sh@819 -- # '[' -z 68304 ']' 00:05:43.646 02:07:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.646 02:07:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.646 02:07:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.646 02:07:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.646 02:07:42 -- common/autotest_common.sh@10 -- # set +x 00:05:43.646 [2024-07-15 02:07:43.005786] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:43.646 [2024-07-15 02:07:43.006199] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68304 ] 00:05:43.646 [2024-07-15 02:07:43.144923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.904 [2024-07-15 02:07:43.241369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.904 [2024-07-15 02:07:43.241511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.904 [2024-07-15 02:07:43.241657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.904 [2024-07-15 02:07:43.241659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.468 02:07:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.468 02:07:43 -- common/autotest_common.sh@852 -- # return 0 00:05:44.468 02:07:43 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:44.468 02:07:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.468 02:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:44.468 POWER: Env isn't set yet! 00:05:44.468 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:44.468 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.468 POWER: Cannot set governor of lcore 0 to userspace 00:05:44.468 POWER: Attempting to initialise PSTAT power management... 00:05:44.468 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.468 POWER: Cannot set governor of lcore 0 to performance 00:05:44.468 POWER: Attempting to initialise CPPC power management... 00:05:44.468 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.468 POWER: Cannot set governor of lcore 0 to userspace 00:05:44.468 POWER: Attempting to initialise VM power management... 00:05:44.468 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:44.468 POWER: Unable to set Power Management Environment for lcore 0 00:05:44.468 [2024-07-15 02:07:43.996588] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:44.468 [2024-07-15 02:07:43.996615] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:44.468 [2024-07-15 02:07:43.996625] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:44.468 [2024-07-15 02:07:43.996638] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:44.468 [2024-07-15 02:07:43.996645] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:44.468 [2024-07-15 02:07:43.996653] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:44.468 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.468 02:07:44 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:44.468 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.468 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 [2024-07-15 02:07:44.090570] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
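The POWER errors above are expected on this VM: the ACPI cpufreq, PSTAT and CPPC sysfs backends are absent and the virtio guest channel device does not exist, so the DPDK governor cannot initialize and the dynamic scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95). For reference, a minimal sketch of driving the same scheduler selection by hand, assuming an SPDK target started with --wait-for-rpc on the default /var/tmp/spdk.sock and commands run from the repo root:

    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py framework_set_scheduler dynamic   # issued before init, as in the test above
    ./scripts/rpc.py framework_start_init              # finish subsystem initialization
    ./scripts/rpc.py framework_get_scheduler           # confirm 'dynamic' is now active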
00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:44.726 02:07:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.726 02:07:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 ************************************ 00:05:44.726 START TEST scheduler_create_thread 00:05:44.726 ************************************ 00:05:44.726 02:07:44 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 2 00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 3 00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 4 00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 5 00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 6 00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 7 00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 8 00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 9 00:05:44.726 
02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 10 00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.726 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.726 02:07:44 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.726 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.726 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:45.293 02:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.293 02:07:44 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.293 02:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.293 02:07:44 -- common/autotest_common.sh@10 -- # set +x 00:05:46.666 02:07:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.666 02:07:46 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:46.666 02:07:46 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:46.666 02:07:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.666 02:07:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.039 ************************************ 00:05:48.039 END TEST scheduler_create_thread 00:05:48.039 ************************************ 00:05:48.039 02:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.039 00:05:48.039 real 0m3.093s 00:05:48.039 user 0m0.019s 00:05:48.039 sys 0m0.005s 00:05:48.039 02:07:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.039 02:07:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.039 02:07:47 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:48.039 02:07:47 -- scheduler/scheduler.sh@46 -- # killprocess 68304 00:05:48.039 02:07:47 -- common/autotest_common.sh@926 -- # '[' -z 68304 ']' 00:05:48.039 02:07:47 -- common/autotest_common.sh@930 -- # kill -0 68304 00:05:48.039 02:07:47 -- common/autotest_common.sh@931 -- # uname 00:05:48.039 02:07:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.039 02:07:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68304 00:05:48.039 02:07:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:48.039 killing process with pid 68304 00:05:48.039 02:07:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:48.039 02:07:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68304' 00:05:48.039 02:07:47 -- common/autotest_common.sh@945 -- # kill 68304 00:05:48.039 02:07:47 -- common/autotest_common.sh@950 -- # wait 68304 00:05:48.039 [2024-07-15 02:07:47.575068] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
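scheduler_create_thread drives everything through the test's RPC plugin: a batch of pinned and unpinned threads is created with scheduler_thread_create, one thread has its active load changed with scheduler_thread_set_active, and one is deleted again with scheduler_thread_delete. A minimal sketch of that sequence, assuming the scheduler test app is listening on /var/tmp/spdk.sock and the plugin module is importable (e.g. PYTHONPATH includes test/event/scheduler):

    rpc() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }
    tid=$(rpc scheduler_thread_create -n half_active -a 0)   # prints the new thread id
    rpc scheduler_thread_set_active "$tid" 50                # raise it to 50% active load
    rpc scheduler_thread_delete "$tid"                       # and tear it down again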
00:05:48.297 ************************************ 00:05:48.297 END TEST event_scheduler 00:05:48.297 ************************************ 00:05:48.297 00:05:48.297 real 0m4.935s 00:05:48.297 user 0m9.722s 00:05:48.297 sys 0m0.363s 00:05:48.297 02:07:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.297 02:07:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.297 02:07:47 -- event/event.sh@51 -- # modprobe -n nbd 00:05:48.297 02:07:47 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:48.297 02:07:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.297 02:07:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.297 02:07:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.555 ************************************ 00:05:48.555 START TEST app_repeat 00:05:48.555 ************************************ 00:05:48.555 02:07:47 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:48.555 02:07:47 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.555 02:07:47 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.555 02:07:47 -- event/event.sh@13 -- # local nbd_list 00:05:48.555 02:07:47 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.555 02:07:47 -- event/event.sh@14 -- # local bdev_list 00:05:48.555 02:07:47 -- event/event.sh@15 -- # local repeat_times=4 00:05:48.555 02:07:47 -- event/event.sh@17 -- # modprobe nbd 00:05:48.555 02:07:47 -- event/event.sh@19 -- # repeat_pid=68422 00:05:48.555 02:07:47 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.555 02:07:47 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:48.555 02:07:47 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68422' 00:05:48.555 Process app_repeat pid: 68422 00:05:48.555 02:07:47 -- event/event.sh@23 -- # for i in {0..2} 00:05:48.555 spdk_app_start Round 0 00:05:48.555 02:07:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:48.555 02:07:47 -- event/event.sh@25 -- # waitforlisten 68422 /var/tmp/spdk-nbd.sock 00:05:48.555 02:07:47 -- common/autotest_common.sh@819 -- # '[' -z 68422 ']' 00:05:48.555 02:07:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.555 02:07:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.555 02:07:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.555 02:07:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.555 02:07:47 -- common/autotest_common.sh@10 -- # set +x 00:05:48.555 [2024-07-15 02:07:47.885681] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:48.555 [2024-07-15 02:07:47.885755] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68422 ] 00:05:48.555 [2024-07-15 02:07:48.019546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.555 [2024-07-15 02:07:48.103384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.555 [2024-07-15 02:07:48.103402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.489 02:07:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.490 02:07:48 -- common/autotest_common.sh@852 -- # return 0 00:05:49.490 02:07:48 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.747 Malloc0 00:05:49.747 02:07:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.005 Malloc1 00:05:50.005 02:07:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@12 -- # local i 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.005 02:07:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.264 /dev/nbd0 00:05:50.264 02:07:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.264 02:07:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.264 02:07:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:50.264 02:07:49 -- common/autotest_common.sh@857 -- # local i 00:05:50.264 02:07:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:50.264 02:07:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:50.264 02:07:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:50.264 02:07:49 -- common/autotest_common.sh@861 -- # break 00:05:50.264 02:07:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:50.264 02:07:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:50.264 02:07:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.264 1+0 records in 00:05:50.264 1+0 records out 00:05:50.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310366 s, 13.2 MB/s 00:05:50.264 02:07:49 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.264 02:07:49 -- common/autotest_common.sh@874 -- # size=4096 00:05:50.264 02:07:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.264 02:07:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:50.264 02:07:49 -- common/autotest_common.sh@877 -- # return 0 00:05:50.264 02:07:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.264 02:07:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.264 02:07:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.522 /dev/nbd1 00:05:50.522 02:07:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.522 02:07:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.522 02:07:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:50.522 02:07:49 -- common/autotest_common.sh@857 -- # local i 00:05:50.522 02:07:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:50.522 02:07:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:50.522 02:07:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:50.522 02:07:49 -- common/autotest_common.sh@861 -- # break 00:05:50.522 02:07:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:50.522 02:07:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:50.522 02:07:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.522 1+0 records in 00:05:50.522 1+0 records out 00:05:50.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266389 s, 15.4 MB/s 00:05:50.522 02:07:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.522 02:07:49 -- common/autotest_common.sh@874 -- # size=4096 00:05:50.522 02:07:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.522 02:07:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:50.522 02:07:49 -- common/autotest_common.sh@877 -- # return 0 00:05:50.522 02:07:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.522 02:07:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.522 02:07:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.522 02:07:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.522 02:07:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.780 { 00:05:50.780 "bdev_name": "Malloc0", 00:05:50.780 "nbd_device": "/dev/nbd0" 00:05:50.780 }, 00:05:50.780 { 00:05:50.780 "bdev_name": "Malloc1", 00:05:50.780 "nbd_device": "/dev/nbd1" 00:05:50.780 } 00:05:50.780 ]' 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.780 { 00:05:50.780 "bdev_name": "Malloc0", 00:05:50.780 "nbd_device": "/dev/nbd0" 00:05:50.780 }, 00:05:50.780 { 00:05:50.780 "bdev_name": "Malloc1", 00:05:50.780 "nbd_device": "/dev/nbd1" 00:05:50.780 } 00:05:50.780 ]' 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.780 /dev/nbd1' 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:05:50.780 /dev/nbd1' 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.780 256+0 records in 00:05:50.780 256+0 records out 00:05:50.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00842147 s, 125 MB/s 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.780 256+0 records in 00:05:50.780 256+0 records out 00:05:50.780 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246936 s, 42.5 MB/s 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.780 02:07:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.039 256+0 records in 00:05:51.039 256+0 records out 00:05:51.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263765 s, 39.8 MB/s 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@51 -- # local i 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.039 02:07:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@41 -- # break 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.297 02:07:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@41 -- # break 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.555 02:07:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@65 -- # true 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.813 02:07:51 -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.813 02:07:51 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.071 02:07:51 -- event/event.sh@35 -- # sleep 3 00:05:52.329 [2024-07-15 02:07:51.709114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.329 [2024-07-15 02:07:51.761261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.329 [2024-07-15 02:07:51.761272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.329 [2024-07-15 02:07:51.816728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.329 [2024-07-15 02:07:51.816777] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
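Round 0 above is the whole NBD round-trip: two 64 MiB malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written to each with dd and verified back with cmp, and the exports are torn down before spdk_kill_instance SIGTERM restarts the app for the next round. A minimal sketch of the same flow for a single device, assuming root, a prior modprobe nbd, and an app serving /var/tmp/spdk-nbd.sock as in this run:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create -b Malloc0 64 4096          # 64 MiB bdev with 4 KiB blocks
    rpc nbd_start_disk Malloc0 /dev/nbd0               # expose it as a kernel block device
    dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=256 oflag=direct
    rpc nbd_get_disks                                  # list active bdev<->nbd mappings
    rpc nbd_stop_disk /dev/nbd0                        # detach before shutting down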
00:05:55.609 02:07:54 -- event/event.sh@23 -- # for i in {0..2} 00:05:55.609 spdk_app_start Round 1 00:05:55.609 02:07:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:55.609 02:07:54 -- event/event.sh@25 -- # waitforlisten 68422 /var/tmp/spdk-nbd.sock 00:05:55.609 02:07:54 -- common/autotest_common.sh@819 -- # '[' -z 68422 ']' 00:05:55.609 02:07:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.609 02:07:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.610 02:07:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.610 02:07:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.610 02:07:54 -- common/autotest_common.sh@10 -- # set +x 00:05:55.610 02:07:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.610 02:07:54 -- common/autotest_common.sh@852 -- # return 0 00:05:55.610 02:07:54 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.610 Malloc0 00:05:55.610 02:07:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.868 Malloc1 00:05:55.868 02:07:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@12 -- # local i 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.868 02:07:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.127 /dev/nbd0 00:05:56.127 02:07:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.127 02:07:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.127 02:07:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:56.127 02:07:55 -- common/autotest_common.sh@857 -- # local i 00:05:56.127 02:07:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:56.127 02:07:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:56.127 02:07:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:56.127 02:07:55 -- common/autotest_common.sh@861 -- # break 00:05:56.127 02:07:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:56.127 02:07:55 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:05:56.127 02:07:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.127 1+0 records in 00:05:56.127 1+0 records out 00:05:56.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221467 s, 18.5 MB/s 00:05:56.127 02:07:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.127 02:07:55 -- common/autotest_common.sh@874 -- # size=4096 00:05:56.127 02:07:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.127 02:07:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:56.127 02:07:55 -- common/autotest_common.sh@877 -- # return 0 00:05:56.127 02:07:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.127 02:07:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.127 02:07:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.385 /dev/nbd1 00:05:56.385 02:07:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.385 02:07:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.385 02:07:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:56.385 02:07:55 -- common/autotest_common.sh@857 -- # local i 00:05:56.385 02:07:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:56.385 02:07:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:56.385 02:07:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:56.385 02:07:55 -- common/autotest_common.sh@861 -- # break 00:05:56.385 02:07:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:56.385 02:07:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:56.385 02:07:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.385 1+0 records in 00:05:56.385 1+0 records out 00:05:56.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028884 s, 14.2 MB/s 00:05:56.385 02:07:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.385 02:07:55 -- common/autotest_common.sh@874 -- # size=4096 00:05:56.385 02:07:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:56.385 02:07:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:56.385 02:07:55 -- common/autotest_common.sh@877 -- # return 0 00:05:56.385 02:07:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.385 02:07:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.385 02:07:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.385 02:07:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.385 02:07:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.643 { 00:05:56.643 "bdev_name": "Malloc0", 00:05:56.643 "nbd_device": "/dev/nbd0" 00:05:56.643 }, 00:05:56.643 { 00:05:56.643 "bdev_name": "Malloc1", 00:05:56.643 "nbd_device": "/dev/nbd1" 00:05:56.643 } 00:05:56.643 ]' 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.643 { 00:05:56.643 "bdev_name": "Malloc0", 00:05:56.643 "nbd_device": "/dev/nbd0" 00:05:56.643 }, 00:05:56.643 { 00:05:56.643 "bdev_name": "Malloc1", 00:05:56.643 "nbd_device": "/dev/nbd1" 00:05:56.643 } 
00:05:56.643 ]' 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.643 /dev/nbd1' 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.643 /dev/nbd1' 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.643 02:07:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.644 256+0 records in 00:05:56.644 256+0 records out 00:05:56.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00857185 s, 122 MB/s 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.644 256+0 records in 00:05:56.644 256+0 records out 00:05:56.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243142 s, 43.1 MB/s 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.644 256+0 records in 00:05:56.644 256+0 records out 00:05:56.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295015 s, 35.5 MB/s 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.644 02:07:56 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
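The trace above is the nbd data-verification round trip: a 1 MiB file of random data is written through each /dev/nbdX with O_DIRECT, then read back and compared byte-for-byte. A minimal sketch of the same pattern, assuming two exported nbd devices (the temp-file path is illustrative, not taken from nbd_common.sh):

    # write phase: push identical random data through every nbd device
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: each device must read back exactly the written bytes
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # non-zero exit fails the test
    done
    rm "$tmp_file"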
00:05:56.901 02:07:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@51 -- # local i 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@41 -- # break 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.901 02:07:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@41 -- # break 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.159 02:07:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@65 -- # true 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.418 02:07:56 -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.418 02:07:56 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.676 02:07:57 -- event/event.sh@35 -- # sleep 3 00:05:57.934 [2024-07-15 02:07:57.356256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.934 [2024-07-15 02:07:57.405847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.934 [2024-07-15 02:07:57.405856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.934 [2024-07-15 02:07:57.463850] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
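waitfornbd_exit, traced above for nbd0 and nbd1, is a bounded poll on /proc/partitions: after nbd_stop_disk is issued over the RPC socket, the helper retries until the kernel drops the device node. A sketch of that loop, with the retry bound of 20 taken from the trace and the sleep interval an assumption:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # device is gone once its name no longer appears in /proc/partitions
            if ! grep -q -w "$nbd_name" /proc/partitions; then
                return 0
            fi
            sleep 0.1   # assumed back-off; the real helper's interval may differ
        done
        return 1
    }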
00:05:57.934 [2024-07-15 02:07:57.463915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.294 02:08:00 -- event/event.sh@23 -- # for i in {0..2} 00:06:01.294 spdk_app_start Round 2 00:06:01.294 02:08:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:01.294 02:08:00 -- event/event.sh@25 -- # waitforlisten 68422 /var/tmp/spdk-nbd.sock 00:06:01.294 02:08:00 -- common/autotest_common.sh@819 -- # '[' -z 68422 ']' 00:06:01.294 02:08:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.294 02:08:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.294 02:08:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.294 02:08:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.294 02:08:00 -- common/autotest_common.sh@10 -- # set +x 00:06:01.294 02:08:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.294 02:08:00 -- common/autotest_common.sh@852 -- # return 0 00:06:01.294 02:08:00 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.294 Malloc0 00:06:01.294 02:08:00 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.553 Malloc1 00:06:01.553 02:08:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@12 -- # local i 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.553 02:08:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.811 /dev/nbd0 00:06:01.811 02:08:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.811 02:08:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.811 02:08:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:01.811 02:08:01 -- common/autotest_common.sh@857 -- # local i 00:06:01.811 02:08:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:01.811 02:08:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:01.811 02:08:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:01.811 02:08:01 -- common/autotest_common.sh@861 
-- # break 00:06:01.811 02:08:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:01.811 02:08:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:01.811 02:08:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.811 1+0 records in 00:06:01.811 1+0 records out 00:06:01.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322028 s, 12.7 MB/s 00:06:01.811 02:08:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.811 02:08:01 -- common/autotest_common.sh@874 -- # size=4096 00:06:01.811 02:08:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.811 02:08:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:01.811 02:08:01 -- common/autotest_common.sh@877 -- # return 0 00:06:01.811 02:08:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.811 02:08:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.811 02:08:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.069 /dev/nbd1 00:06:02.069 02:08:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.069 02:08:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.069 02:08:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:02.069 02:08:01 -- common/autotest_common.sh@857 -- # local i 00:06:02.069 02:08:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:02.069 02:08:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:02.069 02:08:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:02.069 02:08:01 -- common/autotest_common.sh@861 -- # break 00:06:02.069 02:08:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:02.069 02:08:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:02.069 02:08:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.069 1+0 records in 00:06:02.069 1+0 records out 00:06:02.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290145 s, 14.1 MB/s 00:06:02.069 02:08:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.069 02:08:01 -- common/autotest_common.sh@874 -- # size=4096 00:06:02.069 02:08:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.069 02:08:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:02.069 02:08:01 -- common/autotest_common.sh@877 -- # return 0 00:06:02.069 02:08:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.069 02:08:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.069 02:08:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.069 02:08:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.069 02:08:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.328 { 00:06:02.328 "bdev_name": "Malloc0", 00:06:02.328 "nbd_device": "/dev/nbd0" 00:06:02.328 }, 00:06:02.328 { 00:06:02.328 "bdev_name": "Malloc1", 00:06:02.328 "nbd_device": "/dev/nbd1" 00:06:02.328 } 00:06:02.328 ]' 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.328 { 00:06:02.328 "bdev_name": "Malloc0", 00:06:02.328 
"nbd_device": "/dev/nbd0" 00:06:02.328 }, 00:06:02.328 { 00:06:02.328 "bdev_name": "Malloc1", 00:06:02.328 "nbd_device": "/dev/nbd1" 00:06:02.328 } 00:06:02.328 ]' 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.328 /dev/nbd1' 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.328 /dev/nbd1' 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.328 256+0 records in 00:06:02.328 256+0 records out 00:06:02.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614561 s, 171 MB/s 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.328 256+0 records in 00:06:02.328 256+0 records out 00:06:02.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251642 s, 41.7 MB/s 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.328 02:08:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.587 256+0 records in 00:06:02.588 256+0 records out 00:06:02.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293376 s, 35.7 MB/s 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.588 02:08:01 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@51 -- # local i 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.588 02:08:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.588 02:08:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.588 02:08:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.588 02:08:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.588 02:08:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.588 02:08:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.588 02:08:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.847 02:08:02 -- bdev/nbd_common.sh@41 -- # break 00:06:02.847 02:08:02 -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.847 02:08:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.847 02:08:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@41 -- # break 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.105 02:08:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@65 -- # true 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.364 02:08:02 -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.364 02:08:02 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.622 02:08:03 -- event/event.sh@35 -- # sleep 3 00:06:03.879 [2024-07-15 02:08:03.218977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.879 [2024-07-15 02:08:03.264585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.879 [2024-07-15 02:08:03.264591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.879 [2024-07-15 02:08:03.319448] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:06:03.879 [2024-07-15 02:08:03.319524] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.156 02:08:06 -- event/event.sh@38 -- # waitforlisten 68422 /var/tmp/spdk-nbd.sock 00:06:07.156 02:08:06 -- common/autotest_common.sh@819 -- # '[' -z 68422 ']' 00:06:07.156 02:08:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.156 02:08:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.156 02:08:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.156 02:08:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.156 02:08:06 -- common/autotest_common.sh@10 -- # set +x 00:06:07.156 02:08:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.156 02:08:06 -- common/autotest_common.sh@852 -- # return 0 00:06:07.156 02:08:06 -- event/event.sh@39 -- # killprocess 68422 00:06:07.156 02:08:06 -- common/autotest_common.sh@926 -- # '[' -z 68422 ']' 00:06:07.156 02:08:06 -- common/autotest_common.sh@930 -- # kill -0 68422 00:06:07.156 02:08:06 -- common/autotest_common.sh@931 -- # uname 00:06:07.156 02:08:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:07.156 02:08:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68422 00:06:07.156 killing process with pid 68422 00:06:07.156 02:08:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:07.156 02:08:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:07.156 02:08:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68422' 00:06:07.156 02:08:06 -- common/autotest_common.sh@945 -- # kill 68422 00:06:07.156 02:08:06 -- common/autotest_common.sh@950 -- # wait 68422 00:06:07.156 spdk_app_start is called in Round 0. 00:06:07.156 Shutdown signal received, stop current app iteration 00:06:07.156 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:07.156 spdk_app_start is called in Round 1. 00:06:07.156 Shutdown signal received, stop current app iteration 00:06:07.156 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:07.156 spdk_app_start is called in Round 2. 00:06:07.156 Shutdown signal received, stop current app iteration 00:06:07.156 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:07.156 spdk_app_start is called in Round 3. 
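Rounds 0 through 3 in the summary above come from app_repeat's outer loop: the same app instance is restarted three times, and each iteration recreates the malloc bdevs, re-verifies them over nbd, and sends SIGTERM through the RPC socket. A condensed reading of the traced event.sh loop (helper names and paths abbreviated, not verbatim):

    for i in {0..2}; do
        echo "spdk_app_start Round $((i + 1))"
        waitforlisten "$pid" /var/tmp/spdk-nbd.sock
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
        rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3   # let the app restart before the next round
    done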
00:06:07.156 Shutdown signal received, stop current app iteration 00:06:07.156 ************************************ 00:06:07.156 END TEST app_repeat 00:06:07.156 ************************************ 00:06:07.156 02:08:06 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:07.156 02:08:06 -- event/event.sh@42 -- # return 0 00:06:07.156 00:06:07.156 real 0m18.665s 00:06:07.156 user 0m41.843s 00:06:07.156 sys 0m2.943s 00:06:07.156 02:08:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.156 02:08:06 -- common/autotest_common.sh@10 -- # set +x 00:06:07.156 02:08:06 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:07.156 02:08:06 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:07.156 02:08:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.156 02:08:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.156 02:08:06 -- common/autotest_common.sh@10 -- # set +x 00:06:07.156 ************************************ 00:06:07.156 START TEST cpu_locks 00:06:07.156 ************************************ 00:06:07.156 02:08:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:07.156 * Looking for test storage... 00:06:07.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:07.156 02:08:06 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:07.156 02:08:06 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:07.156 02:08:06 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:07.156 02:08:06 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:07.156 02:08:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.156 02:08:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.156 02:08:06 -- common/autotest_common.sh@10 -- # set +x 00:06:07.156 ************************************ 00:06:07.156 START TEST default_locks 00:06:07.156 ************************************ 00:06:07.156 02:08:06 -- common/autotest_common.sh@1104 -- # default_locks 00:06:07.156 02:08:06 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69044 00:06:07.156 02:08:06 -- event/cpu_locks.sh@47 -- # waitforlisten 69044 00:06:07.156 02:08:06 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.156 02:08:06 -- common/autotest_common.sh@819 -- # '[' -z 69044 ']' 00:06:07.156 02:08:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.156 02:08:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.156 02:08:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.156 02:08:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.156 02:08:06 -- common/autotest_common.sh@10 -- # set +x 00:06:07.415 [2024-07-15 02:08:06.738699] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:07.415 [2024-07-15 02:08:06.738838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69044 ] 00:06:07.415 [2024-07-15 02:08:06.872349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.415 [2024-07-15 02:08:06.940544] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.415 [2024-07-15 02:08:06.940736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.351 02:08:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.351 02:08:07 -- common/autotest_common.sh@852 -- # return 0 00:06:08.351 02:08:07 -- event/cpu_locks.sh@49 -- # locks_exist 69044 00:06:08.351 02:08:07 -- event/cpu_locks.sh@22 -- # lslocks -p 69044 00:06:08.351 02:08:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.609 02:08:08 -- event/cpu_locks.sh@50 -- # killprocess 69044 00:06:08.609 02:08:08 -- common/autotest_common.sh@926 -- # '[' -z 69044 ']' 00:06:08.609 02:08:08 -- common/autotest_common.sh@930 -- # kill -0 69044 00:06:08.609 02:08:08 -- common/autotest_common.sh@931 -- # uname 00:06:08.609 02:08:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.609 02:08:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69044 00:06:08.609 02:08:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.609 02:08:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.609 02:08:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69044' 00:06:08.609 killing process with pid 69044 00:06:08.609 02:08:08 -- common/autotest_common.sh@945 -- # kill 69044 00:06:08.609 02:08:08 -- common/autotest_common.sh@950 -- # wait 69044 00:06:09.177 02:08:08 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69044 00:06:09.177 02:08:08 -- common/autotest_common.sh@640 -- # local es=0 00:06:09.177 02:08:08 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69044 00:06:09.177 02:08:08 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:09.177 02:08:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.177 02:08:08 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:09.177 02:08:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:09.177 02:08:08 -- common/autotest_common.sh@643 -- # waitforlisten 69044 00:06:09.177 02:08:08 -- common/autotest_common.sh@819 -- # '[' -z 69044 ']' 00:06:09.177 02:08:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.177 02:08:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.177 02:08:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
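The default_locks check above relies on spdk_tgt started with -m 0x1 taking a file lock for the core it claims. locks_exist, as traced, simply asks lslocks whether the target process still holds a lock whose path contains spdk_cpu_lock (the trace shows only the grep; everything else here is a sketch):

    locks_exist() {
        local pid=$1
        # lslocks lists every lock held by the pid; the cpumask lock
        # files carry "spdk_cpu_lock" in their path
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }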
00:06:09.177 02:08:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.177 ERROR: process (pid: 69044) is no longer running 00:06:09.177 02:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69044) - No such process 00:06:09.177 02:08:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.177 02:08:08 -- common/autotest_common.sh@852 -- # return 1 00:06:09.177 02:08:08 -- common/autotest_common.sh@643 -- # es=1 00:06:09.177 02:08:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:09.177 02:08:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:09.177 02:08:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:09.177 02:08:08 -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.177 02:08:08 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.177 02:08:08 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.177 02:08:08 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.177 00:06:09.177 real 0m1.819s 00:06:09.177 user 0m1.901s 00:06:09.177 sys 0m0.581s 00:06:09.177 02:08:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.177 ************************************ 00:06:09.177 END TEST default_locks 00:06:09.177 ************************************ 00:06:09.177 02:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.177 02:08:08 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.177 02:08:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.177 02:08:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.177 02:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.177 ************************************ 00:06:09.177 START TEST default_locks_via_rpc 00:06:09.177 ************************************ 00:06:09.177 02:08:08 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:09.177 02:08:08 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69104 00:06:09.177 02:08:08 -- event/cpu_locks.sh@63 -- # waitforlisten 69104 00:06:09.177 02:08:08 -- common/autotest_common.sh@819 -- # '[' -z 69104 ']' 00:06:09.177 02:08:08 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.177 02:08:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.177 02:08:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.177 02:08:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.177 02:08:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.177 02:08:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.177 [2024-07-15 02:08:08.617283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:09.177 [2024-07-15 02:08:08.617460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69104 ] 00:06:09.436 [2024-07-15 02:08:08.760234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.436 [2024-07-15 02:08:08.853317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.436 [2024-07-15 02:08:08.853519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.371 02:08:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:10.371 02:08:09 -- common/autotest_common.sh@852 -- # return 0 00:06:10.371 02:08:09 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.371 02:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:10.371 02:08:09 -- common/autotest_common.sh@10 -- # set +x 00:06:10.371 02:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:10.371 02:08:09 -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.371 02:08:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.371 02:08:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.371 02:08:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.371 02:08:09 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.371 02:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:10.371 02:08:09 -- common/autotest_common.sh@10 -- # set +x 00:06:10.371 02:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:10.371 02:08:09 -- event/cpu_locks.sh@71 -- # locks_exist 69104 00:06:10.371 02:08:09 -- event/cpu_locks.sh@22 -- # lslocks -p 69104 00:06:10.371 02:08:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.629 02:08:10 -- event/cpu_locks.sh@73 -- # killprocess 69104 00:06:10.629 02:08:10 -- common/autotest_common.sh@926 -- # '[' -z 69104 ']' 00:06:10.629 02:08:10 -- common/autotest_common.sh@930 -- # kill -0 69104 00:06:10.629 02:08:10 -- common/autotest_common.sh@931 -- # uname 00:06:10.629 02:08:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:10.629 02:08:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69104 00:06:10.629 killing process with pid 69104 00:06:10.629 02:08:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:10.629 02:08:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:10.629 02:08:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69104' 00:06:10.629 02:08:10 -- common/autotest_common.sh@945 -- # kill 69104 00:06:10.629 02:08:10 -- common/autotest_common.sh@950 -- # wait 69104 00:06:11.197 ************************************ 00:06:11.197 END TEST default_locks_via_rpc 00:06:11.197 ************************************ 00:06:11.197 00:06:11.197 real 0m1.895s 00:06:11.197 user 0m2.004s 00:06:11.197 sys 0m0.603s 00:06:11.197 02:08:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.197 02:08:10 -- common/autotest_common.sh@10 -- # set +x 00:06:11.197 02:08:10 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:11.197 02:08:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.197 02:08:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.197 02:08:10 -- common/autotest_common.sh@10 -- # set +x 00:06:11.197 
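default_locks_via_rpc, which ends above, exercises the same lock through runtime RPCs instead of command-line flags: framework_disable_cpumask_locks must release the file lock on core 0, and framework_enable_cpumask_locks must retake it. The traced sequence, condensed (rpc_cmd and no_locks are the suite's own helpers; this is an outline, not the script verbatim):

    rpc_cmd framework_disable_cpumask_locks    # lock on core 0 should be released
    no_locks                                   # asserts lslocks finds no lock files
    rpc_cmd framework_enable_cpumask_locks     # lock should be re-acquired
    locks_exist "$spdk_tgt_pid"                # asserts the lock is back
    killprocess "$spdk_tgt_pid"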
************************************ 00:06:11.197 START TEST non_locking_app_on_locked_coremask 00:06:11.197 ************************************ 00:06:11.197 02:08:10 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:11.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.197 02:08:10 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69173 00:06:11.197 02:08:10 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.197 02:08:10 -- event/cpu_locks.sh@81 -- # waitforlisten 69173 /var/tmp/spdk.sock 00:06:11.197 02:08:10 -- common/autotest_common.sh@819 -- # '[' -z 69173 ']' 00:06:11.197 02:08:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.197 02:08:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.197 02:08:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.197 02:08:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.197 02:08:10 -- common/autotest_common.sh@10 -- # set +x 00:06:11.197 [2024-07-15 02:08:10.559694] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:11.197 [2024-07-15 02:08:10.559797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69173 ] 00:06:11.197 [2024-07-15 02:08:10.694788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.455 [2024-07-15 02:08:10.771975] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.455 [2024-07-15 02:08:10.772161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.022 02:08:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.022 02:08:11 -- common/autotest_common.sh@852 -- # return 0 00:06:12.022 02:08:11 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69201 00:06:12.022 02:08:11 -- event/cpu_locks.sh@85 -- # waitforlisten 69201 /var/tmp/spdk2.sock 00:06:12.022 02:08:11 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.022 02:08:11 -- common/autotest_common.sh@819 -- # '[' -z 69201 ']' 00:06:12.022 02:08:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.022 02:08:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.022 02:08:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.022 02:08:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.022 02:08:11 -- common/autotest_common.sh@10 -- # set +x 00:06:12.022 [2024-07-15 02:08:11.531439] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:12.022 [2024-07-15 02:08:11.531808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69201 ] 00:06:12.280 [2024-07-15 02:08:11.677019] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
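non_locking_app_on_locked_coremask, running above, starts a second spdk_tgt on the same core mask but with --disable-cpumask-locks and its own RPC socket; because the second instance never tries to take the core-0 file lock, both can coexist. In outline, with the binary path abbreviated and pid handling simplified:

    # first target claims core 0 and holds the spdk_cpu_lock file
    spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
    # second target shares core 0 but opts out of the lock entirely
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock
    # only the first instance should show up under lslocks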
00:06:12.280 [2024-07-15 02:08:11.677091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.537 [2024-07-15 02:08:11.848996] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.537 [2024-07-15 02:08:11.849193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.110 02:08:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.110 02:08:12 -- common/autotest_common.sh@852 -- # return 0 00:06:13.110 02:08:12 -- event/cpu_locks.sh@87 -- # locks_exist 69173 00:06:13.110 02:08:12 -- event/cpu_locks.sh@22 -- # lslocks -p 69173 00:06:13.110 02:08:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.045 02:08:13 -- event/cpu_locks.sh@89 -- # killprocess 69173 00:06:14.045 02:08:13 -- common/autotest_common.sh@926 -- # '[' -z 69173 ']' 00:06:14.045 02:08:13 -- common/autotest_common.sh@930 -- # kill -0 69173 00:06:14.045 02:08:13 -- common/autotest_common.sh@931 -- # uname 00:06:14.045 02:08:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:14.045 02:08:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69173 00:06:14.045 killing process with pid 69173 00:06:14.045 02:08:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:14.045 02:08:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:14.045 02:08:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69173' 00:06:14.045 02:08:13 -- common/autotest_common.sh@945 -- # kill 69173 00:06:14.045 02:08:13 -- common/autotest_common.sh@950 -- # wait 69173 00:06:14.611 02:08:14 -- event/cpu_locks.sh@90 -- # killprocess 69201 00:06:14.611 02:08:14 -- common/autotest_common.sh@926 -- # '[' -z 69201 ']' 00:06:14.611 02:08:14 -- common/autotest_common.sh@930 -- # kill -0 69201 00:06:14.611 02:08:14 -- common/autotest_common.sh@931 -- # uname 00:06:14.611 02:08:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:14.611 02:08:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69201 00:06:14.611 02:08:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:14.611 02:08:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:14.611 killing process with pid 69201 00:06:14.611 02:08:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69201' 00:06:14.611 02:08:14 -- common/autotest_common.sh@945 -- # kill 69201 00:06:14.611 02:08:14 -- common/autotest_common.sh@950 -- # wait 69201 00:06:14.869 00:06:14.869 real 0m3.917s 00:06:14.869 user 0m4.347s 00:06:14.869 sys 0m1.091s 00:06:14.869 02:08:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.869 ************************************ 00:06:14.869 END TEST non_locking_app_on_locked_coremask 00:06:14.869 ************************************ 00:06:14.869 02:08:14 -- common/autotest_common.sh@10 -- # set +x 00:06:15.127 02:08:14 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:15.127 02:08:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.127 02:08:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.127 02:08:14 -- common/autotest_common.sh@10 -- # set +x 00:06:15.127 ************************************ 00:06:15.127 START TEST locking_app_on_unlocked_coremask 00:06:15.127 ************************************ 00:06:15.127 02:08:14 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:15.127 02:08:14 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69280 00:06:15.127 02:08:14 -- event/cpu_locks.sh@99 -- # waitforlisten 69280 /var/tmp/spdk.sock 00:06:15.127 02:08:14 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.127 02:08:14 -- common/autotest_common.sh@819 -- # '[' -z 69280 ']' 00:06:15.127 02:08:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.127 02:08:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.127 02:08:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.127 02:08:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.127 02:08:14 -- common/autotest_common.sh@10 -- # set +x 00:06:15.127 [2024-07-15 02:08:14.534508] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:15.127 [2024-07-15 02:08:14.534647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69280 ] 00:06:15.127 [2024-07-15 02:08:14.667294] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:15.127 [2024-07-15 02:08:14.667339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.384 [2024-07-15 02:08:14.746681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.384 [2024-07-15 02:08:14.746883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.950 02:08:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.950 02:08:15 -- common/autotest_common.sh@852 -- # return 0 00:06:15.950 02:08:15 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69308 00:06:15.950 02:08:15 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.950 02:08:15 -- event/cpu_locks.sh@103 -- # waitforlisten 69308 /var/tmp/spdk2.sock 00:06:15.950 02:08:15 -- common/autotest_common.sh@819 -- # '[' -z 69308 ']' 00:06:15.950 02:08:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.950 02:08:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.950 02:08:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.950 02:08:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.950 02:08:15 -- common/autotest_common.sh@10 -- # set +x 00:06:16.208 [2024-07-15 02:08:15.518576] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:16.208 [2024-07-15 02:08:15.518705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69308 ] 00:06:16.208 [2024-07-15 02:08:15.657478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.465 [2024-07-15 02:08:15.793184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.465 [2024-07-15 02:08:15.793370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.030 02:08:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.030 02:08:16 -- common/autotest_common.sh@852 -- # return 0 00:06:17.030 02:08:16 -- event/cpu_locks.sh@105 -- # locks_exist 69308 00:06:17.030 02:08:16 -- event/cpu_locks.sh@22 -- # lslocks -p 69308 00:06:17.030 02:08:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.966 02:08:17 -- event/cpu_locks.sh@107 -- # killprocess 69280 00:06:17.966 02:08:17 -- common/autotest_common.sh@926 -- # '[' -z 69280 ']' 00:06:17.966 02:08:17 -- common/autotest_common.sh@930 -- # kill -0 69280 00:06:17.966 02:08:17 -- common/autotest_common.sh@931 -- # uname 00:06:17.966 02:08:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:17.966 02:08:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69280 00:06:17.966 02:08:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:17.966 02:08:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:17.966 killing process with pid 69280 00:06:17.966 02:08:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69280' 00:06:17.966 02:08:17 -- common/autotest_common.sh@945 -- # kill 69280 00:06:17.966 02:08:17 -- common/autotest_common.sh@950 -- # wait 69280 00:06:18.533 02:08:17 -- event/cpu_locks.sh@108 -- # killprocess 69308 00:06:18.533 02:08:17 -- common/autotest_common.sh@926 -- # '[' -z 69308 ']' 00:06:18.533 02:08:17 -- common/autotest_common.sh@930 -- # kill -0 69308 00:06:18.533 02:08:17 -- common/autotest_common.sh@931 -- # uname 00:06:18.533 02:08:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:18.533 02:08:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69308 00:06:18.533 02:08:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:18.533 02:08:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:18.533 killing process with pid 69308 00:06:18.533 02:08:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69308' 00:06:18.533 02:08:17 -- common/autotest_common.sh@945 -- # kill 69308 00:06:18.533 02:08:17 -- common/autotest_common.sh@950 -- # wait 69308 00:06:18.791 00:06:18.791 real 0m3.824s 00:06:18.791 user 0m4.176s 00:06:18.791 sys 0m1.097s 00:06:18.791 02:08:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.791 02:08:18 -- common/autotest_common.sh@10 -- # set +x 00:06:18.791 ************************************ 00:06:18.791 END TEST locking_app_on_unlocked_coremask 00:06:18.791 ************************************ 00:06:18.791 02:08:18 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.791 02:08:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.791 02:08:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.791 02:08:18 -- common/autotest_common.sh@10 -- # set +x 
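Every test above tears down through killprocess, whose traced body is visible piecemeal: verify the pid still exists with kill -0, confirm the process name (an SPDK reactor, never sudo), log, signal, and wait for exit. Loosely reassembled as one sketch; the real helper in autotest_common.sh handles more platforms and treats the sudo case differently:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                                   # fails fast if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1           # don't SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap and propagate exit status
    }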
00:06:19.049 ************************************ 00:06:19.049 START TEST locking_app_on_locked_coremask 00:06:19.049 ************************************ 00:06:19.049 02:08:18 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:19.049 02:08:18 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69387 00:06:19.049 02:08:18 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.049 02:08:18 -- event/cpu_locks.sh@116 -- # waitforlisten 69387 /var/tmp/spdk.sock 00:06:19.049 02:08:18 -- common/autotest_common.sh@819 -- # '[' -z 69387 ']' 00:06:19.049 02:08:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.049 02:08:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.049 02:08:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.049 02:08:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.049 02:08:18 -- common/autotest_common.sh@10 -- # set +x 00:06:19.049 [2024-07-15 02:08:18.409889] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:19.049 [2024-07-15 02:08:18.409989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69387 ] 00:06:19.049 [2024-07-15 02:08:18.547035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.392 [2024-07-15 02:08:18.607869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.392 [2024-07-15 02:08:18.608102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.959 02:08:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.959 02:08:19 -- common/autotest_common.sh@852 -- # return 0 00:06:19.959 02:08:19 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69415 00:06:19.959 02:08:19 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69415 /var/tmp/spdk2.sock 00:06:19.960 02:08:19 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.960 02:08:19 -- common/autotest_common.sh@640 -- # local es=0 00:06:19.960 02:08:19 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69415 /var/tmp/spdk2.sock 00:06:19.960 02:08:19 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:19.960 02:08:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.960 02:08:19 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:19.960 02:08:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:19.960 02:08:19 -- common/autotest_common.sh@643 -- # waitforlisten 69415 /var/tmp/spdk2.sock 00:06:19.960 02:08:19 -- common/autotest_common.sh@819 -- # '[' -z 69415 ']' 00:06:19.960 02:08:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.960 02:08:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.960 02:08:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
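locking_app_on_locked_coremask inverts the previous test: the second spdk_tgt keeps cpumask locking on, so its startup must fail with "Cannot create lock on core 0". The suite wraps the call in NOT, a helper that succeeds only when the wrapped command fails, as the es=1 bookkeeping in the trace suggests. A sketch of the idea only; the real valid_exec_arg/es logic in autotest_common.sh is more elaborate:

    NOT() {
        # run the command and invert its status so an expected failure passes
        if "$@"; then
            return 1
        fi
        return 0
    }

    # second target on an already-claimed core: waitforlisten must time out
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock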
00:06:19.960 02:08:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.960 02:08:19 -- common/autotest_common.sh@10 -- # set +x 00:06:19.960 [2024-07-15 02:08:19.417180] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:19.960 [2024-07-15 02:08:19.417304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69415 ] 00:06:20.218 [2024-07-15 02:08:19.557438] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69387 has claimed it. 00:06:20.218 [2024-07-15 02:08:19.557509] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.785 ERROR: process (pid: 69415) is no longer running 00:06:20.785 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69415) - No such process 00:06:20.785 02:08:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.785 02:08:20 -- common/autotest_common.sh@852 -- # return 1 00:06:20.785 02:08:20 -- common/autotest_common.sh@643 -- # es=1 00:06:20.785 02:08:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:20.785 02:08:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:20.785 02:08:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:20.785 02:08:20 -- event/cpu_locks.sh@122 -- # locks_exist 69387 00:06:20.785 02:08:20 -- event/cpu_locks.sh@22 -- # lslocks -p 69387 00:06:20.785 02:08:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.044 02:08:20 -- event/cpu_locks.sh@124 -- # killprocess 69387 00:06:21.044 02:08:20 -- common/autotest_common.sh@926 -- # '[' -z 69387 ']' 00:06:21.044 02:08:20 -- common/autotest_common.sh@930 -- # kill -0 69387 00:06:21.044 02:08:20 -- common/autotest_common.sh@931 -- # uname 00:06:21.044 02:08:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:21.044 02:08:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69387 00:06:21.044 02:08:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:21.044 killing process with pid 69387 00:06:21.044 02:08:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:21.044 02:08:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69387' 00:06:21.044 02:08:20 -- common/autotest_common.sh@945 -- # kill 69387 00:06:21.044 02:08:20 -- common/autotest_common.sh@950 -- # wait 69387 00:06:21.611 00:06:21.611 real 0m2.593s 00:06:21.611 user 0m2.965s 00:06:21.611 sys 0m0.649s 00:06:21.611 02:08:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.611 02:08:20 -- common/autotest_common.sh@10 -- # set +x 00:06:21.611 ************************************ 00:06:21.611 END TEST locking_app_on_locked_coremask 00:06:21.611 ************************************ 00:06:21.611 02:08:20 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.611 02:08:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.611 02:08:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.611 02:08:20 -- common/autotest_common.sh@10 -- # set +x 00:06:21.611 ************************************ 00:06:21.611 START TEST locking_overlapped_coremask 00:06:21.611 ************************************ 00:06:21.611 02:08:20 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:21.611 02:08:20 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69461 00:06:21.611 02:08:20 -- event/cpu_locks.sh@133 -- # waitforlisten 69461 /var/tmp/spdk.sock 00:06:21.611 02:08:20 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.611 02:08:20 -- common/autotest_common.sh@819 -- # '[' -z 69461 ']' 00:06:21.611 02:08:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.611 02:08:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.611 02:08:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.611 02:08:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.611 02:08:20 -- common/autotest_common.sh@10 -- # set +x 00:06:21.611 [2024-07-15 02:08:21.059999] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:21.611 [2024-07-15 02:08:21.060114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69461 ] 00:06:21.871 [2024-07-15 02:08:21.196468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.871 [2024-07-15 02:08:21.273774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.871 [2024-07-15 02:08:21.273958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.871 [2024-07-15 02:08:21.274321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.871 [2024-07-15 02:08:21.274341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.811 02:08:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.811 02:08:22 -- common/autotest_common.sh@852 -- # return 0 00:06:22.811 02:08:22 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69491 00:06:22.811 02:08:22 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:22.811 02:08:22 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69491 /var/tmp/spdk2.sock 00:06:22.811 02:08:22 -- common/autotest_common.sh@640 -- # local es=0 00:06:22.811 02:08:22 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69491 /var/tmp/spdk2.sock 00:06:22.811 02:08:22 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:22.811 02:08:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:22.811 02:08:22 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:22.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.811 02:08:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:22.811 02:08:22 -- common/autotest_common.sh@643 -- # waitforlisten 69491 /var/tmp/spdk2.sock 00:06:22.811 02:08:22 -- common/autotest_common.sh@819 -- # '[' -z 69491 ']' 00:06:22.811 02:08:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.811 02:08:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.811 02:08:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
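The overlap under test is already visible in the two core masks: the first target runs with -m 0x7 and the second with -m 0x1c. In binary, 0x7 is 0b00111 (cores 0, 1, 2) and 0x1c is 0b11100 (cores 2, 3, 4), so the masks intersect on exactly one core: core 2, the core named in the claim failure that follows. The intersection can be checked with plain shell arithmetic:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 only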
00:06:22.811 02:08:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.811 02:08:22 -- common/autotest_common.sh@10 -- # set +x 00:06:22.811 [2024-07-15 02:08:22.094511] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:22.811 [2024-07-15 02:08:22.095135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69491 ] 00:06:22.811 [2024-07-15 02:08:22.241162] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69461 has claimed it. 00:06:22.811 [2024-07-15 02:08:22.241219] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.379 ERROR: process (pid: 69491) is no longer running 00:06:23.379 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69491) - No such process 00:06:23.379 02:08:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.379 02:08:22 -- common/autotest_common.sh@852 -- # return 1 00:06:23.379 02:08:22 -- common/autotest_common.sh@643 -- # es=1 00:06:23.379 02:08:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:23.379 02:08:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:23.379 02:08:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:23.379 02:08:22 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:23.379 02:08:22 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.379 02:08:22 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.379 02:08:22 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.379 02:08:22 -- event/cpu_locks.sh@141 -- # killprocess 69461 00:06:23.379 02:08:22 -- common/autotest_common.sh@926 -- # '[' -z 69461 ']' 00:06:23.379 02:08:22 -- common/autotest_common.sh@930 -- # kill -0 69461 00:06:23.379 02:08:22 -- common/autotest_common.sh@931 -- # uname 00:06:23.379 02:08:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.379 02:08:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69461 00:06:23.379 killing process with pid 69461 00:06:23.379 02:08:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.379 02:08:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.379 02:08:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69461' 00:06:23.379 02:08:22 -- common/autotest_common.sh@945 -- # kill 69461 00:06:23.379 02:08:22 -- common/autotest_common.sh@950 -- # wait 69461 00:06:23.637 00:06:23.637 real 0m2.192s 00:06:23.637 user 0m6.188s 00:06:23.637 sys 0m0.447s 00:06:23.637 02:08:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.637 ************************************ 00:06:23.637 END TEST locking_overlapped_coremask 00:06:23.637 ************************************ 00:06:23.637 02:08:23 -- common/autotest_common.sh@10 -- # set +x 00:06:23.895 02:08:23 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:23.895 02:08:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.895 02:08:23 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.895 02:08:23 -- common/autotest_common.sh@10 -- # set +x 00:06:23.895 ************************************ 00:06:23.895 START TEST locking_overlapped_coremask_via_rpc 00:06:23.895 ************************************ 00:06:23.895 02:08:23 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:23.895 02:08:23 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69537 00:06:23.895 02:08:23 -- event/cpu_locks.sh@149 -- # waitforlisten 69537 /var/tmp/spdk.sock 00:06:23.895 02:08:23 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:23.895 02:08:23 -- common/autotest_common.sh@819 -- # '[' -z 69537 ']' 00:06:23.895 02:08:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.895 02:08:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.895 02:08:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.895 02:08:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.895 02:08:23 -- common/autotest_common.sh@10 -- # set +x 00:06:23.895 [2024-07-15 02:08:23.302417] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:23.895 [2024-07-15 02:08:23.302535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69537 ] 00:06:23.895 [2024-07-15 02:08:23.442727] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:23.895 [2024-07-15 02:08:23.442770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.154 [2024-07-15 02:08:23.512907] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.154 [2024-07-15 02:08:23.513170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.154 [2024-07-15 02:08:23.513548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.154 [2024-07-15 02:08:23.513557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.718 02:08:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.718 02:08:24 -- common/autotest_common.sh@852 -- # return 0 00:06:24.718 02:08:24 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:24.718 02:08:24 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69567 00:06:24.718 02:08:24 -- event/cpu_locks.sh@153 -- # waitforlisten 69567 /var/tmp/spdk2.sock 00:06:24.718 02:08:24 -- common/autotest_common.sh@819 -- # '[' -z 69567 ']' 00:06:24.718 02:08:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.719 02:08:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.719 02:08:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:24.719 02:08:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.719 02:08:24 -- common/autotest_common.sh@10 -- # set +x 00:06:24.719 [2024-07-15 02:08:24.241883] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:24.719 [2024-07-15 02:08:24.241961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69567 ] 00:06:24.976 [2024-07-15 02:08:24.377958] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:24.976 [2024-07-15 02:08:24.378005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.976 [2024-07-15 02:08:24.516942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.976 [2024-07-15 02:08:24.517245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.976 [2024-07-15 02:08:24.520730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.976 [2024-07-15 02:08:24.520730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:25.910 02:08:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.910 02:08:25 -- common/autotest_common.sh@852 -- # return 0 00:06:25.910 02:08:25 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.910 02:08:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:25.910 02:08:25 -- common/autotest_common.sh@10 -- # set +x 00:06:25.910 02:08:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:25.910 02:08:25 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.910 02:08:25 -- common/autotest_common.sh@640 -- # local es=0 00:06:25.910 02:08:25 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.910 02:08:25 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:25.910 02:08:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.910 02:08:25 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:25.910 02:08:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.910 02:08:25 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:25.910 02:08:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:25.910 02:08:25 -- common/autotest_common.sh@10 -- # set +x 00:06:25.910 [2024-07-15 02:08:25.178752] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69537 has claimed it. 
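Unlike the previous case, both targets here start with --disable-cpumask-locks, so neither takes its per-core lock files at startup; the locks are requested later over JSON-RPC. The framework_enable_cpumask_locks call on the default socket succeeds, while the same call against the second target's socket fails with the claim error above; the JSON-RPC response that follows carries Code=-32603 for the same reason. A sketch of the two runtime calls, assuming rpc_cmd in this trace wraps SPDK's scripts/rpc.py:

    scripts/rpc.py framework_enable_cpumask_locks                         # first target: claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: core 2 already claimed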
00:06:25.910 2024/07/15 02:08:25 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:25.910 request: 00:06:25.910 { 00:06:25.910 "method": "framework_enable_cpumask_locks", 00:06:25.910 "params": {} 00:06:25.910 } 00:06:25.910 Got JSON-RPC error response 00:06:25.910 GoRPCClient: error on JSON-RPC call 00:06:25.910 02:08:25 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:25.910 02:08:25 -- common/autotest_common.sh@643 -- # es=1 00:06:25.910 02:08:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:25.910 02:08:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:25.910 02:08:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:25.910 02:08:25 -- event/cpu_locks.sh@158 -- # waitforlisten 69537 /var/tmp/spdk.sock 00:06:25.910 02:08:25 -- common/autotest_common.sh@819 -- # '[' -z 69537 ']' 00:06:25.910 02:08:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.910 02:08:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.910 02:08:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.910 02:08:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.910 02:08:25 -- common/autotest_common.sh@10 -- # set +x 00:06:25.910 02:08:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.910 02:08:25 -- common/autotest_common.sh@852 -- # return 0 00:06:25.910 02:08:25 -- event/cpu_locks.sh@159 -- # waitforlisten 69567 /var/tmp/spdk2.sock 00:06:25.910 02:08:25 -- common/autotest_common.sh@819 -- # '[' -z 69567 ']' 00:06:25.910 02:08:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.910 02:08:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.910 02:08:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:25.910 02:08:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.910 02:08:25 -- common/autotest_common.sh@10 -- # set +x 00:06:26.168 02:08:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.168 02:08:25 -- common/autotest_common.sh@852 -- # return 0 00:06:26.168 02:08:25 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:26.168 02:08:25 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.168 02:08:25 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.168 02:08:25 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.168 00:06:26.168 real 0m2.430s 00:06:26.168 user 0m1.146s 00:06:26.168 sys 0m0.212s 00:06:26.168 02:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.168 02:08:25 -- common/autotest_common.sh@10 -- # set +x 00:06:26.168 ************************************ 00:06:26.168 END TEST locking_overlapped_coremask_via_rpc 00:06:26.168 ************************************ 00:06:26.168 02:08:25 -- event/cpu_locks.sh@174 -- # cleanup 00:06:26.168 02:08:25 -- event/cpu_locks.sh@15 -- # [[ -z 69537 ]] 00:06:26.168 02:08:25 -- event/cpu_locks.sh@15 -- # killprocess 69537 00:06:26.168 02:08:25 -- common/autotest_common.sh@926 -- # '[' -z 69537 ']' 00:06:26.168 02:08:25 -- common/autotest_common.sh@930 -- # kill -0 69537 00:06:26.168 02:08:25 -- common/autotest_common.sh@931 -- # uname 00:06:26.168 02:08:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.168 02:08:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69537 00:06:26.426 02:08:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.426 02:08:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.426 02:08:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69537' 00:06:26.426 killing process with pid 69537 00:06:26.426 02:08:25 -- common/autotest_common.sh@945 -- # kill 69537 00:06:26.426 02:08:25 -- common/autotest_common.sh@950 -- # wait 69537 00:06:26.684 02:08:26 -- event/cpu_locks.sh@16 -- # [[ -z 69567 ]] 00:06:26.684 02:08:26 -- event/cpu_locks.sh@16 -- # killprocess 69567 00:06:26.684 02:08:26 -- common/autotest_common.sh@926 -- # '[' -z 69567 ']' 00:06:26.684 02:08:26 -- common/autotest_common.sh@930 -- # kill -0 69567 00:06:26.684 02:08:26 -- common/autotest_common.sh@931 -- # uname 00:06:26.684 02:08:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.684 02:08:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69567 00:06:26.684 02:08:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:26.684 02:08:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:26.684 killing process with pid 69567 00:06:26.684 02:08:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69567' 00:06:26.684 02:08:26 -- common/autotest_common.sh@945 -- # kill 69567 00:06:26.684 02:08:26 -- common/autotest_common.sh@950 -- # wait 69567 00:06:26.947 02:08:26 -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.947 02:08:26 -- event/cpu_locks.sh@1 -- # cleanup 00:06:26.947 02:08:26 -- event/cpu_locks.sh@15 -- # [[ -z 69537 ]] 00:06:26.947 02:08:26 -- event/cpu_locks.sh@15 -- # killprocess 69537 00:06:26.947 02:08:26 -- 
common/autotest_common.sh@926 -- # '[' -z 69537 ']' 00:06:26.947 02:08:26 -- common/autotest_common.sh@930 -- # kill -0 69537 00:06:26.947 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69537) - No such process 00:06:26.947 02:08:26 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69537 is not found' 00:06:26.947 Process with pid 69537 is not found 00:06:26.947 Process with pid 69567 is not found 00:06:26.947 02:08:26 -- event/cpu_locks.sh@16 -- # [[ -z 69567 ]] 00:06:26.947 02:08:26 -- event/cpu_locks.sh@16 -- # killprocess 69567 00:06:26.947 02:08:26 -- common/autotest_common.sh@926 -- # '[' -z 69567 ']' 00:06:26.947 02:08:26 -- common/autotest_common.sh@930 -- # kill -0 69567 00:06:26.947 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69567) - No such process 00:06:26.947 02:08:26 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69567 is not found' 00:06:26.947 02:08:26 -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.947 ************************************ 00:06:26.947 END TEST cpu_locks 00:06:26.947 ************************************ 00:06:26.947 00:06:26.947 real 0m19.916s 00:06:26.947 user 0m34.357s 00:06:26.947 sys 0m5.490s 00:06:26.947 02:08:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.947 02:08:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.218 ************************************ 00:06:27.218 END TEST event 00:06:27.218 ************************************ 00:06:27.218 00:06:27.218 real 0m47.790s 00:06:27.218 user 1m32.420s 00:06:27.218 sys 0m9.209s 00:06:27.218 02:08:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.218 02:08:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.218 02:08:26 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:27.218 02:08:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.218 02:08:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.218 02:08:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.218 ************************************ 00:06:27.218 START TEST thread 00:06:27.218 ************************************ 00:06:27.218 02:08:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:27.218 * Looking for test storage... 00:06:27.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:27.218 02:08:26 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:27.218 02:08:26 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:27.218 02:08:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.218 02:08:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.218 ************************************ 00:06:27.218 START TEST thread_poller_perf 00:06:27.218 ************************************ 00:06:27.218 02:08:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:27.218 [2024-07-15 02:08:26.687902] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:27.218 [2024-07-15 02:08:26.688003] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69718 ] 00:06:27.477 [2024-07-15 02:08:26.825034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.477 [2024-07-15 02:08:26.908348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.477 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:28.854 ====================================== 00:06:28.854 busy:2214923982 (cyc) 00:06:28.854 total_run_count: 332000 00:06:28.854 tsc_hz: 2200000000 (cyc) 00:06:28.854 ====================================== 00:06:28.854 poller_cost: 6671 (cyc), 3032 (nsec) 00:06:28.854 00:06:28.854 real 0m1.316s 00:06:28.854 user 0m1.148s 00:06:28.854 sys 0m0.059s 00:06:28.854 02:08:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.854 02:08:27 -- common/autotest_common.sh@10 -- # set +x 00:06:28.854 ************************************ 00:06:28.854 END TEST thread_poller_perf 00:06:28.854 ************************************ 00:06:28.854 02:08:28 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.854 02:08:28 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:28.854 02:08:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.854 02:08:28 -- common/autotest_common.sh@10 -- # set +x 00:06:28.854 ************************************ 00:06:28.854 START TEST thread_poller_perf 00:06:28.854 ************************************ 00:06:28.854 02:08:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.854 [2024-07-15 02:08:28.061830] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:28.854 [2024-07-15 02:08:28.061929] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69748 ] 00:06:28.854 [2024-07-15 02:08:28.199263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.854 [2024-07-15 02:08:28.278072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.854 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:29.790 ====================================== 00:06:29.790 busy:2202722580 (cyc) 00:06:29.790 total_run_count: 4789000 00:06:29.790 tsc_hz: 2200000000 (cyc) 00:06:29.790 ====================================== 00:06:29.790 poller_cost: 459 (cyc), 208 (nsec) 00:06:29.790 00:06:29.790 real 0m1.291s 00:06:29.790 user 0m1.122s 00:06:29.790 sys 0m0.062s 00:06:29.790 02:08:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.790 02:08:29 -- common/autotest_common.sh@10 -- # set +x 00:06:29.790 ************************************ 00:06:29.790 END TEST thread_poller_perf 00:06:29.790 ************************************ 00:06:30.048 02:08:29 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:30.048 ************************************ 00:06:30.048 END TEST thread 00:06:30.048 ************************************ 00:06:30.048 00:06:30.048 real 0m2.795s 00:06:30.048 user 0m2.344s 00:06:30.048 sys 0m0.223s 00:06:30.048 02:08:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.048 02:08:29 -- common/autotest_common.sh@10 -- # set +x 00:06:30.048 02:08:29 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:30.048 02:08:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.048 02:08:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.048 02:08:29 -- common/autotest_common.sh@10 -- # set +x 00:06:30.048 ************************************ 00:06:30.048 START TEST accel 00:06:30.048 ************************************ 00:06:30.048 02:08:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:30.048 * Looking for test storage... 00:06:30.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:30.048 02:08:29 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:30.048 02:08:29 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:30.048 02:08:29 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.048 02:08:29 -- accel/accel.sh@59 -- # spdk_tgt_pid=69822 00:06:30.048 02:08:29 -- accel/accel.sh@60 -- # waitforlisten 69822 00:06:30.048 02:08:29 -- common/autotest_common.sh@819 -- # '[' -z 69822 ']' 00:06:30.048 02:08:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.048 02:08:29 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:30.048 02:08:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.048 02:08:29 -- accel/accel.sh@58 -- # build_accel_config 00:06:30.049 02:08:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.049 02:08:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.049 02:08:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.049 02:08:29 -- common/autotest_common.sh@10 -- # set +x 00:06:30.049 02:08:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.049 02:08:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.049 02:08:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.049 02:08:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.049 02:08:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.049 02:08:29 -- accel/accel.sh@42 -- # jq -r . 00:06:30.049 [2024-07-15 02:08:29.571647] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:30.049 [2024-07-15 02:08:29.571746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69822 ] 00:06:30.308 [2024-07-15 02:08:29.712394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.308 [2024-07-15 02:08:29.789835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.308 [2024-07-15 02:08:29.790039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.242 02:08:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.242 02:08:30 -- common/autotest_common.sh@852 -- # return 0 00:06:31.242 02:08:30 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:31.242 02:08:30 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:31.242 02:08:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.242 02:08:30 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:31.242 02:08:30 -- common/autotest_common.sh@10 -- # set +x 00:06:31.242 02:08:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.242 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.242 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.242 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.243 02:08:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.243 02:08:30 -- accel/accel.sh@64 -- # IFS== 00:06:31.243 02:08:30 -- accel/accel.sh@64 -- # read -r opc module 00:06:31.243 02:08:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:31.243 02:08:30 -- accel/accel.sh@67 -- # killprocess 69822 00:06:31.243 02:08:30 -- common/autotest_common.sh@926 -- # '[' -z 69822 ']' 00:06:31.243 02:08:30 -- common/autotest_common.sh@930 -- # kill -0 69822 00:06:31.243 02:08:30 -- common/autotest_common.sh@931 -- # uname 00:06:31.243 02:08:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.243 02:08:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69822 00:06:31.243 02:08:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.243 02:08:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.243 02:08:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69822' 00:06:31.243 killing process with pid 69822 00:06:31.243 02:08:30 -- common/autotest_common.sh@945 -- # kill 69822 00:06:31.243 02:08:30 -- common/autotest_common.sh@950 -- # wait 69822 00:06:31.502 02:08:30 -- accel/accel.sh@68 -- # trap - ERR 00:06:31.502 02:08:30 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:31.502 02:08:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:31.502 02:08:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.502 02:08:30 -- common/autotest_common.sh@10 -- # set +x 00:06:31.502 02:08:30 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:31.502 02:08:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:31.502 02:08:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.502 02:08:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.502 02:08:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.502 02:08:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.502 02:08:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.502 02:08:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:31.502 02:08:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.502 02:08:30 -- accel/accel.sh@42 -- # jq -r . 00:06:31.502 02:08:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.502 02:08:31 -- common/autotest_common.sh@10 -- # set +x 00:06:31.502 02:08:31 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:31.502 02:08:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:31.502 02:08:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.502 02:08:31 -- common/autotest_common.sh@10 -- # set +x 00:06:31.761 ************************************ 00:06:31.761 START TEST accel_missing_filename 00:06:31.761 ************************************ 00:06:31.761 02:08:31 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:31.761 02:08:31 -- common/autotest_common.sh@640 -- # local es=0 00:06:31.761 02:08:31 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:31.761 02:08:31 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:31.761 02:08:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.761 02:08:31 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:31.762 02:08:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.762 02:08:31 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:31.762 02:08:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:31.762 02:08:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.762 02:08:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.762 02:08:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.762 02:08:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.762 02:08:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.762 02:08:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.762 02:08:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.762 02:08:31 -- accel/accel.sh@42 -- # jq -r . 00:06:31.762 [2024-07-15 02:08:31.086095] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:31.762 [2024-07-15 02:08:31.086212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69891 ] 00:06:31.762 [2024-07-15 02:08:31.218094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.762 [2024-07-15 02:08:31.279349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.020 [2024-07-15 02:08:31.333227] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.020 [2024-07-15 02:08:31.408019] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:32.020 A filename is required. 
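"A filename is required." closes the first accel negative test: for the compress workload, accel_perf has no input data unless a file is named with -l, so the bare invocation must abort, and the NOT wrapper turns that abort into a pass. The accel_compress_verify test that follows probes the second documented restriction: compression does not support the -y verify switch. Both expected-failure invocations, using only flags that appear in this log:

    accel_perf -t 1 -w compress                                                    # fails: no -l input file
    accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y  # fails: verify unsupported for compress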
00:06:32.020 02:08:31 -- common/autotest_common.sh@643 -- # es=234 00:06:32.020 02:08:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:32.020 02:08:31 -- common/autotest_common.sh@652 -- # es=106 00:06:32.020 02:08:31 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:32.020 02:08:31 -- common/autotest_common.sh@660 -- # es=1 00:06:32.020 ************************************ 00:06:32.020 END TEST accel_missing_filename 00:06:32.020 ************************************ 00:06:32.021 02:08:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:32.021 00:06:32.021 real 0m0.402s 00:06:32.021 user 0m0.235s 00:06:32.021 sys 0m0.113s 00:06:32.021 02:08:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.021 02:08:31 -- common/autotest_common.sh@10 -- # set +x 00:06:32.021 02:08:31 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:32.021 02:08:31 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:32.021 02:08:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.021 02:08:31 -- common/autotest_common.sh@10 -- # set +x 00:06:32.021 ************************************ 00:06:32.021 START TEST accel_compress_verify 00:06:32.021 ************************************ 00:06:32.021 02:08:31 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:32.021 02:08:31 -- common/autotest_common.sh@640 -- # local es=0 00:06:32.021 02:08:31 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:32.021 02:08:31 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:32.021 02:08:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:32.021 02:08:31 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:32.021 02:08:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:32.021 02:08:31 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:32.021 02:08:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:32.021 02:08:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.021 02:08:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.021 02:08:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.021 02:08:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.021 02:08:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.021 02:08:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.021 02:08:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.021 02:08:31 -- accel/accel.sh@42 -- # jq -r . 00:06:32.021 [2024-07-15 02:08:31.539209] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:32.021 [2024-07-15 02:08:31.539334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69910 ] 00:06:32.280 [2024-07-15 02:08:31.676152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.280 [2024-07-15 02:08:31.749344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.280 [2024-07-15 02:08:31.806446] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.539 [2024-07-15 02:08:31.882306] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:32.539 00:06:32.539 Compression does not support the verify option, aborting. 00:06:32.539 02:08:31 -- common/autotest_common.sh@643 -- # es=161 00:06:32.539 02:08:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:32.539 02:08:31 -- common/autotest_common.sh@652 -- # es=33 00:06:32.539 02:08:31 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:32.539 02:08:31 -- common/autotest_common.sh@660 -- # es=1 00:06:32.539 02:08:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:32.539 00:06:32.539 real 0m0.435s 00:06:32.539 user 0m0.268s 00:06:32.539 sys 0m0.115s 00:06:32.539 02:08:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.539 ************************************ 00:06:32.539 END TEST accel_compress_verify 00:06:32.539 ************************************ 00:06:32.539 02:08:31 -- common/autotest_common.sh@10 -- # set +x 00:06:32.539 02:08:31 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:32.539 02:08:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:32.539 02:08:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.539 02:08:31 -- common/autotest_common.sh@10 -- # set +x 00:06:32.539 ************************************ 00:06:32.539 START TEST accel_wrong_workload 00:06:32.539 ************************************ 00:06:32.539 02:08:32 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:32.539 02:08:32 -- common/autotest_common.sh@640 -- # local es=0 00:06:32.539 02:08:32 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:32.539 02:08:32 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:32.539 02:08:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:32.539 02:08:32 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:32.539 02:08:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:32.539 02:08:32 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:32.539 02:08:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:32.539 02:08:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.539 02:08:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.539 02:08:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.539 02:08:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.539 02:08:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.539 02:08:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.539 02:08:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.539 02:08:32 -- accel/accel.sh@42 -- # jq -r . 
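accel_wrong_workload now feeds a name that is not in accel_perf's workload list, so argument parsing fails on 'w' and the app prints its usage text instead of running. The accepted names are the ones enumerated in that usage text: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, and the dif variants. The rejected call, next to an accepted one taken from the crc32c run later in this log:

    accel_perf -t 1 -w foobar             # rejected: unsupported workload type
    accel_perf -t 1 -w crc32c -S 32 -y    # accepted: CRC-32C with seed 32, verify on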
00:06:32.539 Unsupported workload type: foobar 00:06:32.539 [2024-07-15 02:08:32.025007] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:32.539 accel_perf options: 00:06:32.539 [-h help message] 00:06:32.539 [-q queue depth per core] 00:06:32.539 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:32.539 [-T number of threads per core 00:06:32.539 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:32.539 [-t time in seconds] 00:06:32.539 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:32.539 [ dif_verify, , dif_generate, dif_generate_copy 00:06:32.539 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:32.539 [-l for compress/decompress workloads, name of uncompressed input file 00:06:32.539 [-S for crc32c workload, use this seed value (default 0) 00:06:32.539 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:32.539 [-f for fill workload, use this BYTE value (default 255) 00:06:32.539 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:32.539 [-y verify result if this switch is on] 00:06:32.539 [-a tasks to allocate per core (default: same value as -q)] 00:06:32.539 Can be used to spread operations across a wider range of memory. 00:06:32.539 02:08:32 -- common/autotest_common.sh@643 -- # es=1 00:06:32.539 02:08:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:32.539 02:08:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:32.539 ************************************ 00:06:32.539 END TEST accel_wrong_workload 00:06:32.539 ************************************ 00:06:32.539 02:08:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:32.539 00:06:32.539 real 0m0.029s 00:06:32.539 user 0m0.015s 00:06:32.539 sys 0m0.013s 00:06:32.539 02:08:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.539 02:08:32 -- common/autotest_common.sh@10 -- # set +x 00:06:32.539 02:08:32 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:32.539 02:08:32 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:32.539 02:08:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.539 02:08:32 -- common/autotest_common.sh@10 -- # set +x 00:06:32.539 ************************************ 00:06:32.539 START TEST accel_negative_buffers 00:06:32.539 ************************************ 00:06:32.539 02:08:32 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:32.539 02:08:32 -- common/autotest_common.sh@640 -- # local es=0 00:06:32.539 02:08:32 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:32.539 02:08:32 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:32.539 02:08:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:32.539 02:08:32 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:32.540 02:08:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:32.540 02:08:32 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:32.540 02:08:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:32.540 02:08:32 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:32.540 02:08:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.540 02:08:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.540 02:08:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.540 02:08:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.540 02:08:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.540 02:08:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.540 02:08:32 -- accel/accel.sh@42 -- # jq -r . 00:06:32.799 -x option must be non-negative. 00:06:32.799 [2024-07-15 02:08:32.100425] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:32.799 accel_perf options: 00:06:32.799 [-h help message] 00:06:32.799 [-q queue depth per core] 00:06:32.799 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:32.799 [-T number of threads per core 00:06:32.799 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:32.799 [-t time in seconds] 00:06:32.799 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:32.799 [ dif_verify, , dif_generate, dif_generate_copy 00:06:32.799 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:32.799 [-l for compress/decompress workloads, name of uncompressed input file 00:06:32.799 [-S for crc32c workload, use this seed value (default 0) 00:06:32.799 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:32.799 [-f for fill workload, use this BYTE value (default 255) 00:06:32.799 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:32.799 [-y verify result if this switch is on] 00:06:32.799 [-a tasks to allocate per core (default: same value as -q)] 00:06:32.799 Can be used to spread operations across a wider range of memory. 
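The same parse-time validation catches the buffer count here: -x must be non-negative, and the usage text above notes that xor needs at least two source buffers, so -x -1 makes spdk_app_parse_args fail on 'x' and the test records the expected exit status. For contrast, a form the documented limits would allow; the exact run is an assumption, since this log shows only the failing call:

    accel_perf -t 1 -w xor -y -x 2    # two source buffers, the documented minimum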
00:06:32.799 ************************************ 00:06:32.799 END TEST accel_negative_buffers 00:06:32.799 ************************************ 00:06:32.799 02:08:32 -- common/autotest_common.sh@643 -- # es=1 00:06:32.799 02:08:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:32.799 02:08:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:32.799 02:08:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:32.799 00:06:32.799 real 0m0.030s 00:06:32.799 user 0m0.021s 00:06:32.799 sys 0m0.009s 00:06:32.799 02:08:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.799 02:08:32 -- common/autotest_common.sh@10 -- # set +x 00:06:32.799 02:08:32 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:32.799 02:08:32 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:32.799 02:08:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.799 02:08:32 -- common/autotest_common.sh@10 -- # set +x 00:06:32.799 ************************************ 00:06:32.799 START TEST accel_crc32c 00:06:32.799 ************************************ 00:06:32.799 02:08:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:32.799 02:08:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.799 02:08:32 -- accel/accel.sh@17 -- # local accel_module 00:06:32.799 02:08:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:32.799 02:08:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:32.799 02:08:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.799 02:08:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.799 02:08:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.799 02:08:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.799 02:08:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.799 02:08:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.799 02:08:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.799 02:08:32 -- accel/accel.sh@42 -- # jq -r . 00:06:32.799 [2024-07-15 02:08:32.176668] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:32.799 [2024-07-15 02:08:32.176752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69976 ] 00:06:32.799 [2024-07-15 02:08:32.311678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.058 [2024-07-15 02:08:32.396391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.432 02:08:33 -- accel/accel.sh@18 -- # out=' 00:06:34.432 SPDK Configuration: 00:06:34.432 Core mask: 0x1 00:06:34.432 00:06:34.432 Accel Perf Configuration: 00:06:34.432 Workload Type: crc32c 00:06:34.432 CRC-32C seed: 32 00:06:34.432 Transfer size: 4096 bytes 00:06:34.432 Vector count 1 00:06:34.432 Module: software 00:06:34.432 Queue depth: 32 00:06:34.432 Allocate depth: 32 00:06:34.432 # threads/core: 1 00:06:34.432 Run time: 1 seconds 00:06:34.432 Verify: Yes 00:06:34.432 00:06:34.432 Running for 1 seconds... 
00:06:34.432 00:06:34.432 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.432 ------------------------------------------------------------------------------------ 00:06:34.432 0,0 493408/s 1927 MiB/s 0 0 00:06:34.432 ==================================================================================== 00:06:34.432 Total 493408/s 1927 MiB/s 0 0' 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:34.432 02:08:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.432 02:08:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:34.432 02:08:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.432 02:08:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.432 02:08:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.432 02:08:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.432 02:08:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.432 02:08:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.432 02:08:33 -- accel/accel.sh@42 -- # jq -r . 00:06:34.432 [2024-07-15 02:08:33.625266] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:34.432 [2024-07-15 02:08:33.625356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69996 ] 00:06:34.432 [2024-07-15 02:08:33.760886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.432 [2024-07-15 02:08:33.841414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val= 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val= 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val=0x1 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val= 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val= 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val=crc32c 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val=32 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val= 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val=software 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val=32 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val=32 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val=1 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val=Yes 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val= 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:34.432 02:08:33 -- accel/accel.sh@21 -- # val= 00:06:34.432 02:08:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # IFS=: 00:06:34.432 02:08:33 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 02:08:35 -- accel/accel.sh@21 -- # val= 00:06:35.804 02:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 02:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:35.804 02:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 02:08:35 -- accel/accel.sh@21 -- # val= 00:06:35.804 02:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 02:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:35.804 02:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 02:08:35 -- accel/accel.sh@21 -- # val= 00:06:35.804 02:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 02:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:35.804 02:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 02:08:35 -- accel/accel.sh@21 -- # val= 00:06:35.804 02:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.804 02:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:35.804 02:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:35.804 ************************************ 00:06:35.804 END TEST accel_crc32c 00:06:35.804 ************************************ 00:06:35.804 02:08:35 -- accel/accel.sh@21 -- # val= 
00:06:35.804 02:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.805 02:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:35.805 02:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:35.805 02:08:35 -- accel/accel.sh@21 -- # val= 00:06:35.805 02:08:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.805 02:08:35 -- accel/accel.sh@20 -- # IFS=: 00:06:35.805 02:08:35 -- accel/accel.sh@20 -- # read -r var val 00:06:35.805 02:08:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.805 02:08:35 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:35.805 02:08:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.805 00:06:35.805 real 0m2.880s 00:06:35.805 user 0m2.453s 00:06:35.805 sys 0m0.224s 00:06:35.805 02:08:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.805 02:08:35 -- common/autotest_common.sh@10 -- # set +x 00:06:35.805 02:08:35 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:35.805 02:08:35 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:35.805 02:08:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.805 02:08:35 -- common/autotest_common.sh@10 -- # set +x 00:06:35.805 ************************************ 00:06:35.805 START TEST accel_crc32c_C2 00:06:35.805 ************************************ 00:06:35.805 02:08:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:35.805 02:08:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.805 02:08:35 -- accel/accel.sh@17 -- # local accel_module 00:06:35.805 02:08:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:35.805 02:08:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:35.805 02:08:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.805 02:08:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.805 02:08:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.805 02:08:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.805 02:08:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.805 02:08:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.805 02:08:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.805 02:08:35 -- accel/accel.sh@42 -- # jq -r . 00:06:35.805 [2024-07-15 02:08:35.111282] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:35.805 [2024-07-15 02:08:35.111370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70030 ] 00:06:35.805 [2024-07-15 02:08:35.243727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.805 [2024-07-15 02:08:35.303529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.180 02:08:36 -- accel/accel.sh@18 -- # out=' 00:06:37.180 SPDK Configuration: 00:06:37.180 Core mask: 0x1 00:06:37.180 00:06:37.180 Accel Perf Configuration: 00:06:37.180 Workload Type: crc32c 00:06:37.180 CRC-32C seed: 0 00:06:37.180 Transfer size: 4096 bytes 00:06:37.180 Vector count 2 00:06:37.180 Module: software 00:06:37.180 Queue depth: 32 00:06:37.180 Allocate depth: 32 00:06:37.180 # threads/core: 1 00:06:37.180 Run time: 1 seconds 00:06:37.180 Verify: Yes 00:06:37.180 00:06:37.180 Running for 1 seconds... 
00:06:37.180 00:06:37.180 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.180 ------------------------------------------------------------------------------------ 00:06:37.180 0,0 382208/s 2986 MiB/s 0 0 00:06:37.180 ==================================================================================== 00:06:37.180 Total 382208/s 1493 MiB/s 0 0' 00:06:37.180 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.180 02:08:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:37.180 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.180 02:08:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:37.180 02:08:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.180 02:08:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.180 02:08:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.180 02:08:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.180 02:08:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.180 02:08:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.180 02:08:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.180 02:08:36 -- accel/accel.sh@42 -- # jq -r . 00:06:37.180 [2024-07-15 02:08:36.527538] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:37.180 [2024-07-15 02:08:36.527660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70044 ] 00:06:37.180 [2024-07-15 02:08:36.661376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.180 [2024-07-15 02:08:36.718839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val= 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val= 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val=0x1 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val= 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val= 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val=crc32c 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val=0 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val= 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val=software 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val=32 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val=32 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val=1 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val=Yes 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.438 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.438 02:08:36 -- accel/accel.sh@21 -- # val= 00:06:37.438 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:37.439 02:08:36 -- accel/accel.sh@21 -- # val= 00:06:37.439 02:08:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.439 02:08:36 -- accel/accel.sh@20 -- # IFS=: 00:06:37.439 02:08:36 -- accel/accel.sh@20 -- # read -r var val 00:06:38.420 02:08:37 -- accel/accel.sh@21 -- # val= 00:06:38.420 02:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # IFS=: 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # read -r var val 00:06:38.420 02:08:37 -- accel/accel.sh@21 -- # val= 00:06:38.420 02:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # IFS=: 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # read -r var val 00:06:38.420 02:08:37 -- accel/accel.sh@21 -- # val= 00:06:38.420 02:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # IFS=: 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # read -r var val 00:06:38.420 02:08:37 -- accel/accel.sh@21 -- # val= 00:06:38.420 ************************************ 00:06:38.420 END TEST accel_crc32c_C2 00:06:38.420 ************************************ 00:06:38.420 02:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # IFS=: 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # read -r var val 00:06:38.420 02:08:37 -- accel/accel.sh@21 -- # val= 
00:06:38.420 02:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # IFS=: 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # read -r var val 00:06:38.420 02:08:37 -- accel/accel.sh@21 -- # val= 00:06:38.420 02:08:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # IFS=: 00:06:38.420 02:08:37 -- accel/accel.sh@20 -- # read -r var val 00:06:38.420 02:08:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.420 02:08:37 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:38.420 02:08:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.420 00:06:38.420 real 0m2.836s 00:06:38.420 user 0m2.407s 00:06:38.420 sys 0m0.228s 00:06:38.420 02:08:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.420 02:08:37 -- common/autotest_common.sh@10 -- # set +x 00:06:38.678 02:08:37 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:38.678 02:08:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:38.678 02:08:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.678 02:08:37 -- common/autotest_common.sh@10 -- # set +x 00:06:38.678 ************************************ 00:06:38.678 START TEST accel_copy 00:06:38.678 ************************************ 00:06:38.678 02:08:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:38.678 02:08:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.678 02:08:37 -- accel/accel.sh@17 -- # local accel_module 00:06:38.678 02:08:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:38.678 02:08:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:38.678 02:08:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.678 02:08:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.678 02:08:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.678 02:08:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.678 02:08:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.678 02:08:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.678 02:08:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.678 02:08:37 -- accel/accel.sh@42 -- # jq -r . 00:06:38.678 [2024-07-15 02:08:38.001470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:38.678 [2024-07-15 02:08:38.001573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70083 ] 00:06:38.678 [2024-07-15 02:08:38.140414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.678 [2024-07-15 02:08:38.215782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.051 02:08:39 -- accel/accel.sh@18 -- # out=' 00:06:40.051 SPDK Configuration: 00:06:40.051 Core mask: 0x1 00:06:40.051 00:06:40.051 Accel Perf Configuration: 00:06:40.051 Workload Type: copy 00:06:40.051 Transfer size: 4096 bytes 00:06:40.051 Vector count 1 00:06:40.051 Module: software 00:06:40.051 Queue depth: 32 00:06:40.051 Allocate depth: 32 00:06:40.051 # threads/core: 1 00:06:40.051 Run time: 1 seconds 00:06:40.051 Verify: Yes 00:06:40.051 00:06:40.051 Running for 1 seconds... 
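The copy workload is a plain 4 KiB buffer-to-buffer copy through the software accel module, with -y verifying each destination against its source after completion. Stripped of the config descriptor, the traced command is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y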
00:06:40.051 00:06:40.051 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.051 ------------------------------------------------------------------------------------ 00:06:40.051 0,0 323424/s 1263 MiB/s 0 0 00:06:40.051 ==================================================================================== 00:06:40.051 Total 323424/s 1263 MiB/s 0 0' 00:06:40.051 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.051 02:08:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:40.051 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.051 02:08:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:40.051 02:08:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.051 02:08:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.051 02:08:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.051 02:08:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.051 02:08:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.051 02:08:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.051 02:08:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.051 02:08:39 -- accel/accel.sh@42 -- # jq -r . 00:06:40.051 [2024-07-15 02:08:39.448534] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:40.051 [2024-07-15 02:08:39.448712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70098 ] 00:06:40.051 [2024-07-15 02:08:39.582812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.309 [2024-07-15 02:08:39.647479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val= 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val= 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val=0x1 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val= 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val= 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val=copy 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- 
accel/accel.sh@21 -- # val= 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val=software 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val=32 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val=32 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val=1 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val=Yes 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val= 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:40.309 02:08:39 -- accel/accel.sh@21 -- # val= 00:06:40.309 02:08:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # IFS=: 00:06:40.309 02:08:39 -- accel/accel.sh@20 -- # read -r var val 00:06:41.684 02:08:40 -- accel/accel.sh@21 -- # val= 00:06:41.684 02:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # IFS=: 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # read -r var val 00:06:41.684 02:08:40 -- accel/accel.sh@21 -- # val= 00:06:41.684 02:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # IFS=: 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # read -r var val 00:06:41.684 02:08:40 -- accel/accel.sh@21 -- # val= 00:06:41.684 02:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # IFS=: 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # read -r var val 00:06:41.684 02:08:40 -- accel/accel.sh@21 -- # val= 00:06:41.684 02:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # IFS=: 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # read -r var val 00:06:41.684 02:08:40 -- accel/accel.sh@21 -- # val= 00:06:41.684 02:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # IFS=: 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # read -r var val 00:06:41.684 02:08:40 -- accel/accel.sh@21 -- # val= 00:06:41.684 02:08:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.684 02:08:40 -- accel/accel.sh@20 -- # IFS=: 00:06:41.684 02:08:40 -- 
accel/accel.sh@20 -- # read -r var val 00:06:41.684 02:08:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.684 02:08:40 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:41.684 02:08:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.684 00:06:41.684 real 0m2.866s 00:06:41.684 user 0m2.431s 00:06:41.684 sys 0m0.229s 00:06:41.684 02:08:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.684 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:06:41.684 ************************************ 00:06:41.684 END TEST accel_copy 00:06:41.684 ************************************ 00:06:41.684 02:08:40 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.684 02:08:40 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:41.684 02:08:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.684 02:08:40 -- common/autotest_common.sh@10 -- # set +x 00:06:41.684 ************************************ 00:06:41.684 START TEST accel_fill 00:06:41.684 ************************************ 00:06:41.684 02:08:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.684 02:08:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.684 02:08:40 -- accel/accel.sh@17 -- # local accel_module 00:06:41.684 02:08:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.684 02:08:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:41.684 02:08:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.684 02:08:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.684 02:08:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.684 02:08:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.684 02:08:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.684 02:08:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.684 02:08:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.684 02:08:40 -- accel/accel.sh@42 -- # jq -r . 00:06:41.684 [2024-07-15 02:08:40.921431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:41.684 [2024-07-15 02:08:40.921516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70133 ] 00:06:41.684 [2024-07-15 02:08:41.051273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.684 [2024-07-15 02:08:41.120331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.059 02:08:42 -- accel/accel.sh@18 -- # out=' 00:06:43.059 SPDK Configuration: 00:06:43.059 Core mask: 0x1 00:06:43.059 00:06:43.059 Accel Perf Configuration: 00:06:43.059 Workload Type: fill 00:06:43.059 Fill pattern: 0x80 00:06:43.059 Transfer size: 4096 bytes 00:06:43.059 Vector count 1 00:06:43.059 Module: software 00:06:43.059 Queue depth: 64 00:06:43.059 Allocate depth: 64 00:06:43.059 # threads/core: 1 00:06:43.059 Run time: 1 seconds 00:06:43.059 Verify: Yes 00:06:43.059 00:06:43.059 Running for 1 seconds... 
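For the fill workload, -f 128 sets the pattern byte (0x80 in the configuration summary, overriding the usage default of 255) and -q 64/-a 64 double the queue and allocate depths from their default of 32. The traced command without the harness descriptor:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y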
00:06:43.059 00:06:43.059 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.059 ------------------------------------------------------------------------------------ 00:06:43.059 0,0 526080/s 2055 MiB/s 0 0 00:06:43.059 ==================================================================================== 00:06:43.059 Total 526080/s 2055 MiB/s 0 0' 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:43.059 02:08:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.059 02:08:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.059 02:08:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.059 02:08:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.059 02:08:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.059 02:08:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.059 02:08:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.059 02:08:42 -- accel/accel.sh@42 -- # jq -r . 00:06:43.059 [2024-07-15 02:08:42.322165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:43.059 [2024-07-15 02:08:42.322237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70152 ] 00:06:43.059 [2024-07-15 02:08:42.452683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.059 [2024-07-15 02:08:42.532560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val= 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val= 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val=0x1 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val= 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val= 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val=fill 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val=0x80 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 
00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val= 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val=software 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val=64 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val=64 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val=1 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val=Yes 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val= 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:43.059 02:08:42 -- accel/accel.sh@21 -- # val= 00:06:43.059 02:08:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # IFS=: 00:06:43.059 02:08:42 -- accel/accel.sh@20 -- # read -r var val 00:06:44.435 02:08:43 -- accel/accel.sh@21 -- # val= 00:06:44.435 02:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:44.435 02:08:43 -- accel/accel.sh@21 -- # val= 00:06:44.435 02:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:44.435 02:08:43 -- accel/accel.sh@21 -- # val= 00:06:44.435 02:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:44.435 02:08:43 -- accel/accel.sh@21 -- # val= 00:06:44.435 02:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:44.435 02:08:43 -- accel/accel.sh@21 -- # val= 00:06:44.435 02:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # IFS=: 
00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:44.435 02:08:43 -- accel/accel.sh@21 -- # val= 00:06:44.435 02:08:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # IFS=: 00:06:44.435 02:08:43 -- accel/accel.sh@20 -- # read -r var val 00:06:44.435 02:08:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.435 02:08:43 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:44.435 02:08:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.435 00:06:44.435 real 0m2.821s 00:06:44.435 user 0m2.398s 00:06:44.435 sys 0m0.221s 00:06:44.435 ************************************ 00:06:44.435 02:08:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.435 02:08:43 -- common/autotest_common.sh@10 -- # set +x 00:06:44.435 END TEST accel_fill 00:06:44.435 ************************************ 00:06:44.435 02:08:43 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:44.435 02:08:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:44.435 02:08:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.435 02:08:43 -- common/autotest_common.sh@10 -- # set +x 00:06:44.435 ************************************ 00:06:44.435 START TEST accel_copy_crc32c 00:06:44.435 ************************************ 00:06:44.435 02:08:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:44.435 02:08:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.435 02:08:43 -- accel/accel.sh@17 -- # local accel_module 00:06:44.435 02:08:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:44.435 02:08:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:44.435 02:08:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.435 02:08:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.435 02:08:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.435 02:08:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.435 02:08:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.435 02:08:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.435 02:08:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.435 02:08:43 -- accel/accel.sh@42 -- # jq -r . 00:06:44.435 [2024-07-15 02:08:43.794354] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:44.435 [2024-07-15 02:08:43.794442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70182 ] 00:06:44.435 [2024-07-15 02:08:43.930503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.435 [2024-07-15 02:08:43.985037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.810 02:08:45 -- accel/accel.sh@18 -- # out=' 00:06:45.810 SPDK Configuration: 00:06:45.810 Core mask: 0x1 00:06:45.810 00:06:45.810 Accel Perf Configuration: 00:06:45.810 Workload Type: copy_crc32c 00:06:45.810 CRC-32C seed: 0 00:06:45.810 Vector size: 4096 bytes 00:06:45.810 Transfer size: 4096 bytes 00:06:45.810 Vector count 1 00:06:45.810 Module: software 00:06:45.810 Queue depth: 32 00:06:45.810 Allocate depth: 32 00:06:45.810 # threads/core: 1 00:06:45.810 Run time: 1 seconds 00:06:45.810 Verify: Yes 00:06:45.810 00:06:45.810 Running for 1 seconds... 
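copy_crc32c fuses the two operations benchmarked earlier: each task copies a 4096-byte source buffer and computes its CRC-32C (seed 0) in a single accel operation, which is why the summary lists both a vector size and a transfer size. Traced command, minus the config descriptor:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y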
00:06:45.810 00:06:45.810 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.810 ------------------------------------------------------------------------------------ 00:06:45.810 0,0 278368/s 1087 MiB/s 0 0 00:06:45.810 ==================================================================================== 00:06:45.810 Total 278368/s 1087 MiB/s 0 0' 00:06:45.810 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:45.810 02:08:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:45.810 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:45.810 02:08:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:45.810 02:08:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.810 02:08:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.810 02:08:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.810 02:08:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.810 02:08:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.810 02:08:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.810 02:08:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.810 02:08:45 -- accel/accel.sh@42 -- # jq -r . 00:06:45.810 [2024-07-15 02:08:45.199826] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:45.810 [2024-07-15 02:08:45.199917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70201 ] 00:06:45.810 [2024-07-15 02:08:45.335397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.070 [2024-07-15 02:08:45.401454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val= 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val= 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val=0x1 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val= 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val= 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val=0 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 
02:08:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val= 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val=software 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val=32 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val=32 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val=1 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val=Yes 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val= 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:46.070 02:08:45 -- accel/accel.sh@21 -- # val= 00:06:46.070 02:08:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # IFS=: 00:06:46.070 02:08:45 -- accel/accel.sh@20 -- # read -r var val 00:06:47.044 02:08:46 -- accel/accel.sh@21 -- # val= 00:06:47.044 02:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.044 02:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.044 02:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.044 02:08:46 -- accel/accel.sh@21 -- # val= 00:06:47.044 02:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.044 02:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.044 02:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.044 02:08:46 -- accel/accel.sh@21 -- # val= 00:06:47.044 02:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.044 02:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.044 02:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.044 02:08:46 -- accel/accel.sh@21 -- # val= 00:06:47.044 02:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.044 02:08:46 -- accel/accel.sh@20 -- # IFS=: 
00:06:47.044 02:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.303 ************************************ 00:06:47.303 END TEST accel_copy_crc32c 00:06:47.303 ************************************ 00:06:47.303 02:08:46 -- accel/accel.sh@21 -- # val= 00:06:47.303 02:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.303 02:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.303 02:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.303 02:08:46 -- accel/accel.sh@21 -- # val= 00:06:47.303 02:08:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.303 02:08:46 -- accel/accel.sh@20 -- # IFS=: 00:06:47.303 02:08:46 -- accel/accel.sh@20 -- # read -r var val 00:06:47.303 02:08:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.303 02:08:46 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:47.303 02:08:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.303 00:06:47.303 real 0m2.828s 00:06:47.303 user 0m2.401s 00:06:47.303 sys 0m0.223s 00:06:47.303 02:08:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.303 02:08:46 -- common/autotest_common.sh@10 -- # set +x 00:06:47.303 02:08:46 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:47.303 02:08:46 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:47.303 02:08:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.303 02:08:46 -- common/autotest_common.sh@10 -- # set +x 00:06:47.303 ************************************ 00:06:47.303 START TEST accel_copy_crc32c_C2 00:06:47.303 ************************************ 00:06:47.303 02:08:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:47.303 02:08:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.303 02:08:46 -- accel/accel.sh@17 -- # local accel_module 00:06:47.303 02:08:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:47.303 02:08:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:47.303 02:08:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.303 02:08:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.303 02:08:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.303 02:08:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.303 02:08:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.303 02:08:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.303 02:08:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.303 02:08:46 -- accel/accel.sh@42 -- # jq -r . 00:06:47.303 [2024-07-15 02:08:46.672062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:47.303 [2024-07-15 02:08:46.672167] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70237 ] 00:06:47.304 [2024-07-15 02:08:46.807182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.562 [2024-07-15 02:08:46.880120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.938 02:08:48 -- accel/accel.sh@18 -- # out=' 00:06:48.938 SPDK Configuration: 00:06:48.938 Core mask: 0x1 00:06:48.938 00:06:48.938 Accel Perf Configuration: 00:06:48.938 Workload Type: copy_crc32c 00:06:48.938 CRC-32C seed: 0 00:06:48.938 Vector size: 4096 bytes 00:06:48.938 Transfer size: 8192 bytes 00:06:48.938 Vector count 2 00:06:48.938 Module: software 00:06:48.938 Queue depth: 32 00:06:48.938 Allocate depth: 32 00:06:48.938 # threads/core: 1 00:06:48.938 Run time: 1 seconds 00:06:48.938 Verify: Yes 00:06:48.938 00:06:48.938 Running for 1 seconds... 00:06:48.938 00:06:48.938 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.938 ------------------------------------------------------------------------------------ 00:06:48.938 0,0 199072/s 1555 MiB/s 0 0 00:06:48.938 ==================================================================================== 00:06:48.938 Total 199072/s 777 MiB/s 0 0' 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:48.938 02:08:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.938 02:08:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.938 02:08:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.938 02:08:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.938 02:08:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.938 02:08:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.938 02:08:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.938 02:08:48 -- accel/accel.sh@42 -- # jq -r . 00:06:48.938 [2024-07-15 02:08:48.096892] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:48.938 [2024-07-15 02:08:48.097011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70257 ] 00:06:48.938 [2024-07-15 02:08:48.232337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.938 [2024-07-15 02:08:48.303170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val= 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val= 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val=0x1 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val= 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val= 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val=0 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val= 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val=software 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val=32 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val=32 
00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val=1 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.938 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.938 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.938 02:08:48 -- accel/accel.sh@21 -- # val=Yes 00:06:48.939 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:08:48 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:08:48 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:08:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:08:48 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:08:48 -- accel/accel.sh@20 -- # read -r var val 00:06:50.313 02:08:49 -- accel/accel.sh@21 -- # val= 00:06:50.313 02:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.313 02:08:49 -- accel/accel.sh@21 -- # val= 00:06:50.313 02:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.313 02:08:49 -- accel/accel.sh@21 -- # val= 00:06:50.313 02:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.313 02:08:49 -- accel/accel.sh@21 -- # val= 00:06:50.313 02:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.313 02:08:49 -- accel/accel.sh@21 -- # val= 00:06:50.313 02:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.313 02:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.313 02:08:49 -- accel/accel.sh@21 -- # val= 00:06:50.314 02:08:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.314 02:08:49 -- accel/accel.sh@20 -- # IFS=: 00:06:50.314 02:08:49 -- accel/accel.sh@20 -- # read -r var val 00:06:50.314 02:08:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.314 02:08:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:50.314 02:08:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.314 00:06:50.314 real 0m2.875s 00:06:50.314 user 0m1.224s 00:06:50.314 sys 0m0.111s 00:06:50.314 02:08:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.314 02:08:49 -- common/autotest_common.sh@10 -- # set +x 00:06:50.314 ************************************ 00:06:50.314 END TEST accel_copy_crc32c_C2 00:06:50.314 ************************************ 00:06:50.314 02:08:49 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:50.314 02:08:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:06:50.314 02:08:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.314 02:08:49 -- common/autotest_common.sh@10 -- # set +x 00:06:50.314 ************************************ 00:06:50.314 START TEST accel_dualcast 00:06:50.314 ************************************ 00:06:50.314 02:08:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:50.314 02:08:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.314 02:08:49 -- accel/accel.sh@17 -- # local accel_module 00:06:50.314 02:08:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:50.314 02:08:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:50.314 02:08:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.314 02:08:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.314 02:08:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.314 02:08:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.314 02:08:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.314 02:08:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.314 02:08:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.314 02:08:49 -- accel/accel.sh@42 -- # jq -r . 00:06:50.314 [2024-07-15 02:08:49.602529] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:50.314 [2024-07-15 02:08:49.602692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70291 ] 00:06:50.314 [2024-07-15 02:08:49.739713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.314 [2024-07-15 02:08:49.815071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.686 02:08:51 -- accel/accel.sh@18 -- # out=' 00:06:51.686 SPDK Configuration: 00:06:51.686 Core mask: 0x1 00:06:51.686 00:06:51.686 Accel Perf Configuration: 00:06:51.686 Workload Type: dualcast 00:06:51.686 Transfer size: 4096 bytes 00:06:51.686 Vector count 1 00:06:51.686 Module: software 00:06:51.686 Queue depth: 32 00:06:51.686 Allocate depth: 32 00:06:51.686 # threads/core: 1 00:06:51.686 Run time: 1 seconds 00:06:51.686 Verify: Yes 00:06:51.686 00:06:51.686 Running for 1 seconds... 00:06:51.686 00:06:51.686 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.686 ------------------------------------------------------------------------------------ 00:06:51.686 0,0 389312/s 1520 MiB/s 0 0 00:06:51.686 ==================================================================================== 00:06:51.686 Total 389312/s 1520 MiB/s 0 0' 00:06:51.686 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.686 02:08:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:51.686 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.686 02:08:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:51.686 02:08:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.686 02:08:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.686 02:08:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.686 02:08:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.686 02:08:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.686 02:08:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.686 02:08:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.686 02:08:51 -- accel/accel.sh@42 -- # jq -r . 
00:06:51.686 [2024-07-15 02:08:51.031943] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:51.686 [2024-07-15 02:08:51.032045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70311 ] 00:06:51.686 [2024-07-15 02:08:51.167100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.686 [2024-07-15 02:08:51.230919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val= 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val= 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val=0x1 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val= 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val= 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val=dualcast 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val= 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val=software 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val=32 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val=32 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val=1 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 
02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val=Yes 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val= 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:51.946 02:08:51 -- accel/accel.sh@21 -- # val= 00:06:51.946 02:08:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # IFS=: 00:06:51.946 02:08:51 -- accel/accel.sh@20 -- # read -r var val 00:06:52.882 02:08:52 -- accel/accel.sh@21 -- # val= 00:06:52.882 02:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.882 02:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:52.882 02:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:52.882 02:08:52 -- accel/accel.sh@21 -- # val= 00:06:52.882 02:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.882 02:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:52.883 02:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:52.883 02:08:52 -- accel/accel.sh@21 -- # val= 00:06:52.883 02:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.883 02:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:52.883 02:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:52.883 02:08:52 -- accel/accel.sh@21 -- # val= 00:06:52.883 02:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.883 02:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:52.883 02:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:52.883 02:08:52 -- accel/accel.sh@21 -- # val= 00:06:52.883 02:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.883 02:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:52.883 02:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:52.883 02:08:52 -- accel/accel.sh@21 -- # val= 00:06:52.883 02:08:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.883 02:08:52 -- accel/accel.sh@20 -- # IFS=: 00:06:52.883 02:08:52 -- accel/accel.sh@20 -- # read -r var val 00:06:52.883 02:08:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.883 02:08:52 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:52.883 02:08:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.883 00:06:52.883 real 0m2.848s 00:06:52.883 user 0m2.426s 00:06:52.883 sys 0m0.219s 00:06:52.883 02:08:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.883 02:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:52.883 ************************************ 00:06:52.883 END TEST accel_dualcast 00:06:52.883 ************************************ 00:06:53.141 02:08:52 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:53.141 02:08:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:53.141 02:08:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.141 02:08:52 -- common/autotest_common.sh@10 -- # set +x 00:06:53.141 ************************************ 00:06:53.141 START TEST accel_compare 00:06:53.141 ************************************ 00:06:53.141 02:08:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:53.141 
02:08:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.141 02:08:52 -- accel/accel.sh@17 -- # local accel_module 00:06:53.141 02:08:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:53.141 02:08:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:53.141 02:08:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.141 02:08:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.141 02:08:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.141 02:08:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.141 02:08:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.141 02:08:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.141 02:08:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.141 02:08:52 -- accel/accel.sh@42 -- # jq -r . 00:06:53.141 [2024-07-15 02:08:52.510884] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:53.141 [2024-07-15 02:08:52.511055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70340 ] 00:06:53.141 [2024-07-15 02:08:52.648558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.399 [2024-07-15 02:08:52.722726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.774 02:08:53 -- accel/accel.sh@18 -- # out=' 00:06:54.774 SPDK Configuration: 00:06:54.774 Core mask: 0x1 00:06:54.774 00:06:54.774 Accel Perf Configuration: 00:06:54.774 Workload Type: compare 00:06:54.774 Transfer size: 4096 bytes 00:06:54.774 Vector count 1 00:06:54.774 Module: software 00:06:54.774 Queue depth: 32 00:06:54.774 Allocate depth: 32 00:06:54.774 # threads/core: 1 00:06:54.774 Run time: 1 seconds 00:06:54.774 Verify: Yes 00:06:54.774 00:06:54.774 Running for 1 seconds... 00:06:54.774 00:06:54.774 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.774 ------------------------------------------------------------------------------------ 00:06:54.774 0,0 531872/s 2077 MiB/s 0 0 00:06:54.774 ==================================================================================== 00:06:54.774 Total 531872/s 2077 MiB/s 0 0' 00:06:54.774 02:08:53 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:54.774 02:08:53 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:54.774 02:08:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.774 02:08:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.774 02:08:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.774 02:08:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.774 02:08:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.774 02:08:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.774 02:08:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.774 02:08:53 -- accel/accel.sh@42 -- # jq -r . 00:06:54.774 [2024-07-15 02:08:53.954358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:54.774 [2024-07-15 02:08:53.954945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70359 ] 00:06:54.774 [2024-07-15 02:08:54.088347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.774 [2024-07-15 02:08:54.158201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val= 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val= 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val=0x1 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val= 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val= 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val=compare 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val= 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val=software 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val=32 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val=32 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val=1 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val=Yes 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val= 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:54.774 02:08:54 -- accel/accel.sh@21 -- # val= 00:06:54.774 02:08:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # IFS=: 00:06:54.774 02:08:54 -- accel/accel.sh@20 -- # read -r var val 00:06:56.166 02:08:55 -- accel/accel.sh@21 -- # val= 00:06:56.166 02:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:56.166 02:08:55 -- accel/accel.sh@21 -- # val= 00:06:56.166 02:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:56.166 02:08:55 -- accel/accel.sh@21 -- # val= 00:06:56.166 02:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:56.166 02:08:55 -- accel/accel.sh@21 -- # val= 00:06:56.166 02:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:56.166 02:08:55 -- accel/accel.sh@21 -- # val= 00:06:56.166 02:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:56.166 02:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:56.166 02:08:55 -- accel/accel.sh@21 -- # val= 00:06:56.166 02:08:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.167 02:08:55 -- accel/accel.sh@20 -- # IFS=: 00:06:56.167 02:08:55 -- accel/accel.sh@20 -- # read -r var val 00:06:56.167 02:08:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.167 02:08:55 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:56.167 02:08:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.167 ************************************ 00:06:56.167 END TEST accel_compare 00:06:56.167 ************************************ 00:06:56.167 00:06:56.167 real 0m2.870s 00:06:56.167 user 0m2.446s 00:06:56.167 sys 0m0.217s 00:06:56.167 02:08:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.167 02:08:55 -- common/autotest_common.sh@10 -- # set +x 00:06:56.167 02:08:55 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:56.167 02:08:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:56.167 02:08:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.167 02:08:55 -- common/autotest_common.sh@10 -- # set +x 00:06:56.167 ************************************ 00:06:56.167 START TEST accel_xor 00:06:56.167 ************************************ 00:06:56.167 02:08:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:56.167 02:08:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.167 02:08:55 -- accel/accel.sh@17 -- # local accel_module 00:06:56.167 
02:08:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:56.167 02:08:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:56.167 02:08:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.167 02:08:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.167 02:08:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.167 02:08:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.167 02:08:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.167 02:08:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.167 02:08:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.167 02:08:55 -- accel/accel.sh@42 -- # jq -r . 00:06:56.167 [2024-07-15 02:08:55.431504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:56.167 [2024-07-15 02:08:55.431616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70395 ] 00:06:56.167 [2024-07-15 02:08:55.567199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.167 [2024-07-15 02:08:55.635757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.593 02:08:56 -- accel/accel.sh@18 -- # out=' 00:06:57.593 SPDK Configuration: 00:06:57.593 Core mask: 0x1 00:06:57.593 00:06:57.593 Accel Perf Configuration: 00:06:57.593 Workload Type: xor 00:06:57.593 Source buffers: 2 00:06:57.593 Transfer size: 4096 bytes 00:06:57.593 Vector count 1 00:06:57.593 Module: software 00:06:57.593 Queue depth: 32 00:06:57.593 Allocate depth: 32 00:06:57.593 # threads/core: 1 00:06:57.593 Run time: 1 seconds 00:06:57.593 Verify: Yes 00:06:57.593 00:06:57.593 Running for 1 seconds... 00:06:57.593 00:06:57.593 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.593 ------------------------------------------------------------------------------------ 00:06:57.593 0,0 281984/s 1101 MiB/s 0 0 00:06:57.593 ==================================================================================== 00:06:57.593 Total 281984/s 1101 MiB/s 0 0' 00:06:57.593 02:08:56 -- accel/accel.sh@20 -- # IFS=: 00:06:57.593 02:08:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:57.593 02:08:56 -- accel/accel.sh@20 -- # read -r var val 00:06:57.593 02:08:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.593 02:08:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:57.593 02:08:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.593 02:08:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.593 02:08:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.593 02:08:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.593 02:08:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.593 02:08:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.593 02:08:56 -- accel/accel.sh@42 -- # jq -r . 00:06:57.593 [2024-07-15 02:08:56.860685] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:57.593 [2024-07-15 02:08:56.860786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70414 ] 00:06:57.593 [2024-07-15 02:08:56.991168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.593 [2024-07-15 02:08:57.060665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.593 02:08:57 -- accel/accel.sh@21 -- # val= 00:06:57.593 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.593 02:08:57 -- accel/accel.sh@21 -- # val= 00:06:57.593 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.593 02:08:57 -- accel/accel.sh@21 -- # val=0x1 00:06:57.593 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.593 02:08:57 -- accel/accel.sh@21 -- # val= 00:06:57.593 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.593 02:08:57 -- accel/accel.sh@21 -- # val= 00:06:57.593 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.593 02:08:57 -- accel/accel.sh@21 -- # val=xor 00:06:57.593 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.593 02:08:57 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.593 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.593 02:08:57 -- accel/accel.sh@21 -- # val=2 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val= 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val=software 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val=32 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val=32 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val=1 00:06:57.594 02:08:57 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val=Yes 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val= 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:57.594 02:08:57 -- accel/accel.sh@21 -- # val= 00:06:57.594 02:08:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # IFS=: 00:06:57.594 02:08:57 -- accel/accel.sh@20 -- # read -r var val 00:06:58.971 02:08:58 -- accel/accel.sh@21 -- # val= 00:06:58.971 02:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:58.971 02:08:58 -- accel/accel.sh@21 -- # val= 00:06:58.971 02:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:58.971 02:08:58 -- accel/accel.sh@21 -- # val= 00:06:58.971 02:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:58.971 02:08:58 -- accel/accel.sh@21 -- # val= 00:06:58.971 02:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:58.971 02:08:58 -- accel/accel.sh@21 -- # val= 00:06:58.971 02:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:58.971 02:08:58 -- accel/accel.sh@21 -- # val= 00:06:58.971 02:08:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # IFS=: 00:06:58.971 02:08:58 -- accel/accel.sh@20 -- # read -r var val 00:06:58.971 02:08:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.971 02:08:58 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:58.971 ************************************ 00:06:58.971 END TEST accel_xor 00:06:58.971 ************************************ 00:06:58.971 02:08:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.971 00:06:58.971 real 0m2.860s 00:06:58.971 user 0m2.438s 00:06:58.971 sys 0m0.216s 00:06:58.971 02:08:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.971 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:06:58.971 02:08:58 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:58.971 02:08:58 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:58.971 02:08:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.971 02:08:58 -- common/autotest_common.sh@10 -- # set +x 00:06:58.971 ************************************ 00:06:58.971 START TEST accel_xor 00:06:58.971 ************************************ 00:06:58.971 
02:08:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:58.971 02:08:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.971 02:08:58 -- accel/accel.sh@17 -- # local accel_module 00:06:58.971 02:08:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:58.971 02:08:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:58.971 02:08:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.971 02:08:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.971 02:08:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.971 02:08:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.971 02:08:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.971 02:08:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.971 02:08:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.971 02:08:58 -- accel/accel.sh@42 -- # jq -r . 00:06:58.971 [2024-07-15 02:08:58.346884] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:58.971 [2024-07-15 02:08:58.346975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70449 ] 00:06:58.972 [2024-07-15 02:08:58.486399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.230 [2024-07-15 02:08:58.568119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.605 02:08:59 -- accel/accel.sh@18 -- # out=' 00:07:00.605 SPDK Configuration: 00:07:00.605 Core mask: 0x1 00:07:00.605 00:07:00.605 Accel Perf Configuration: 00:07:00.605 Workload Type: xor 00:07:00.605 Source buffers: 3 00:07:00.605 Transfer size: 4096 bytes 00:07:00.605 Vector count 1 00:07:00.605 Module: software 00:07:00.605 Queue depth: 32 00:07:00.605 Allocate depth: 32 00:07:00.605 # threads/core: 1 00:07:00.605 Run time: 1 seconds 00:07:00.605 Verify: Yes 00:07:00.605 00:07:00.605 Running for 1 seconds... 00:07:00.605 00:07:00.605 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.605 ------------------------------------------------------------------------------------ 00:07:00.605 0,0 265152/s 1035 MiB/s 0 0 00:07:00.605 ==================================================================================== 00:07:00.605 Total 265152/s 1035 MiB/s 0 0' 00:07:00.605 02:08:59 -- accel/accel.sh@20 -- # IFS=: 00:07:00.605 02:08:59 -- accel/accel.sh@20 -- # read -r var val 00:07:00.605 02:08:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:00.605 02:08:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:00.605 02:08:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.605 02:08:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.605 02:08:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.605 02:08:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.605 02:08:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.605 02:08:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.605 02:08:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.605 02:08:59 -- accel/accel.sh@42 -- # jq -r . 00:07:00.605 [2024-07-15 02:08:59.783843] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:00.605 [2024-07-15 02:08:59.783932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70463 ] 00:07:00.605 [2024-07-15 02:08:59.918678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.605 [2024-07-15 02:08:59.994710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.605 02:09:00 -- accel/accel.sh@21 -- # val= 00:07:00.605 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.605 02:09:00 -- accel/accel.sh@21 -- # val= 00:07:00.605 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.605 02:09:00 -- accel/accel.sh@21 -- # val=0x1 00:07:00.605 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.605 02:09:00 -- accel/accel.sh@21 -- # val= 00:07:00.605 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.605 02:09:00 -- accel/accel.sh@21 -- # val= 00:07:00.605 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.605 02:09:00 -- accel/accel.sh@21 -- # val=xor 00:07:00.605 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.605 02:09:00 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.605 02:09:00 -- accel/accel.sh@21 -- # val=3 00:07:00.605 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.605 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.605 02:09:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.605 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.606 02:09:00 -- accel/accel.sh@21 -- # val= 00:07:00.606 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.606 02:09:00 -- accel/accel.sh@21 -- # val=software 00:07:00.606 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.606 02:09:00 -- accel/accel.sh@21 -- # val=32 00:07:00.606 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.606 02:09:00 -- accel/accel.sh@21 -- # val=32 00:07:00.606 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.606 02:09:00 -- accel/accel.sh@21 -- # val=1 00:07:00.606 02:09:00 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.606 02:09:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.606 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.606 02:09:00 -- accel/accel.sh@21 -- # val=Yes 00:07:00.606 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.606 02:09:00 -- accel/accel.sh@21 -- # val= 00:07:00.606 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:00.606 02:09:00 -- accel/accel.sh@21 -- # val= 00:07:00.606 02:09:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # IFS=: 00:07:00.606 02:09:00 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 02:09:01 -- accel/accel.sh@21 -- # val= 00:07:01.981 02:09:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 02:09:01 -- accel/accel.sh@21 -- # val= 00:07:01.981 02:09:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 02:09:01 -- accel/accel.sh@21 -- # val= 00:07:01.981 02:09:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 02:09:01 -- accel/accel.sh@21 -- # val= 00:07:01.981 02:09:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 02:09:01 -- accel/accel.sh@21 -- # val= 00:07:01.981 02:09:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 02:09:01 -- accel/accel.sh@21 -- # val= 00:07:01.981 02:09:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 02:09:01 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 02:09:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.981 02:09:01 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:01.981 02:09:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.981 00:07:01.981 real 0m2.873s 00:07:01.981 user 0m2.442s 00:07:01.981 sys 0m0.229s 00:07:01.981 02:09:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.981 02:09:01 -- common/autotest_common.sh@10 -- # set +x 00:07:01.981 ************************************ 00:07:01.981 END TEST accel_xor 00:07:01.981 ************************************ 00:07:01.981 02:09:01 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:01.981 02:09:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:01.981 02:09:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.981 02:09:01 -- common/autotest_common.sh@10 -- # set +x 00:07:01.981 ************************************ 00:07:01.981 START TEST accel_dif_verify 00:07:01.981 ************************************ 
00:07:01.981 02:09:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:01.981 02:09:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.981 02:09:01 -- accel/accel.sh@17 -- # local accel_module 00:07:01.981 02:09:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:01.981 02:09:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:01.981 02:09:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.981 02:09:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.981 02:09:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.981 02:09:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.981 02:09:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.981 02:09:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.981 02:09:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.981 02:09:01 -- accel/accel.sh@42 -- # jq -r . 00:07:01.981 [2024-07-15 02:09:01.274326] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:01.981 [2024-07-15 02:09:01.274414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70503 ] 00:07:01.981 [2024-07-15 02:09:01.403487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.981 [2024-07-15 02:09:01.477213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.357 02:09:02 -- accel/accel.sh@18 -- # out=' 00:07:03.357 SPDK Configuration: 00:07:03.357 Core mask: 0x1 00:07:03.357 00:07:03.357 Accel Perf Configuration: 00:07:03.357 Workload Type: dif_verify 00:07:03.357 Vector size: 4096 bytes 00:07:03.357 Transfer size: 4096 bytes 00:07:03.357 Block size: 512 bytes 00:07:03.357 Metadata size: 8 bytes 00:07:03.357 Vector count 1 00:07:03.357 Module: software 00:07:03.357 Queue depth: 32 00:07:03.357 Allocate depth: 32 00:07:03.357 # threads/core: 1 00:07:03.357 Run time: 1 seconds 00:07:03.357 Verify: No 00:07:03.357 00:07:03.357 Running for 1 seconds... 00:07:03.357 00:07:03.357 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.357 ------------------------------------------------------------------------------------ 00:07:03.357 0,0 108224/s 429 MiB/s 0 0 00:07:03.357 ==================================================================================== 00:07:03.357 Total 108224/s 422 MiB/s 0 0' 00:07:03.357 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.357 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.357 02:09:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:03.357 02:09:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:03.357 02:09:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.357 02:09:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.357 02:09:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.357 02:09:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.357 02:09:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.357 02:09:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.357 02:09:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.357 02:09:02 -- accel/accel.sh@42 -- # jq -r . 00:07:03.357 [2024-07-15 02:09:02.698861] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:03.357 [2024-07-15 02:09:02.698968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70517 ] 00:07:03.357 [2024-07-15 02:09:02.834452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.357 [2024-07-15 02:09:02.909578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val= 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val= 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val=0x1 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val= 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val= 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val=dif_verify 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val= 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val=software 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 
-- # val=32 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val=32 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val=1 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val=No 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val= 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:03.616 02:09:02 -- accel/accel.sh@21 -- # val= 00:07:03.616 02:09:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # IFS=: 00:07:03.616 02:09:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.557 02:09:04 -- accel/accel.sh@21 -- # val= 00:07:04.557 02:09:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # IFS=: 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # read -r var val 00:07:04.557 02:09:04 -- accel/accel.sh@21 -- # val= 00:07:04.557 02:09:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # IFS=: 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # read -r var val 00:07:04.557 02:09:04 -- accel/accel.sh@21 -- # val= 00:07:04.557 02:09:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # IFS=: 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # read -r var val 00:07:04.557 02:09:04 -- accel/accel.sh@21 -- # val= 00:07:04.557 02:09:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # IFS=: 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # read -r var val 00:07:04.557 02:09:04 -- accel/accel.sh@21 -- # val= 00:07:04.557 02:09:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # IFS=: 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # read -r var val 00:07:04.557 02:09:04 -- accel/accel.sh@21 -- # val= 00:07:04.557 02:09:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # IFS=: 00:07:04.557 02:09:04 -- accel/accel.sh@20 -- # read -r var val 00:07:04.557 02:09:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.557 02:09:04 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:04.557 02:09:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.557 00:07:04.557 real 0m2.848s 00:07:04.557 user 0m2.416s 00:07:04.557 sys 0m0.228s 00:07:04.557 02:09:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.557 02:09:04 -- common/autotest_common.sh@10 -- # set +x 00:07:04.557 ************************************ 00:07:04.557 END TEST 
accel_dif_verify 00:07:04.557 ************************************ 00:07:04.815 02:09:04 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:04.815 02:09:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:04.815 02:09:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.815 02:09:04 -- common/autotest_common.sh@10 -- # set +x 00:07:04.815 ************************************ 00:07:04.815 START TEST accel_dif_generate 00:07:04.815 ************************************ 00:07:04.815 02:09:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:04.815 02:09:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.815 02:09:04 -- accel/accel.sh@17 -- # local accel_module 00:07:04.815 02:09:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:04.815 02:09:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:04.815 02:09:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.815 02:09:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.815 02:09:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.815 02:09:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.815 02:09:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.815 02:09:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.815 02:09:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.815 02:09:04 -- accel/accel.sh@42 -- # jq -r . 00:07:04.815 [2024-07-15 02:09:04.172031] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:04.815 [2024-07-15 02:09:04.172094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70551 ] 00:07:04.815 [2024-07-15 02:09:04.302448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.815 [2024-07-15 02:09:04.366314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.193 02:09:05 -- accel/accel.sh@18 -- # out=' 00:07:06.193 SPDK Configuration: 00:07:06.193 Core mask: 0x1 00:07:06.193 00:07:06.193 Accel Perf Configuration: 00:07:06.193 Workload Type: dif_generate 00:07:06.193 Vector size: 4096 bytes 00:07:06.193 Transfer size: 4096 bytes 00:07:06.193 Block size: 512 bytes 00:07:06.193 Metadata size: 8 bytes 00:07:06.193 Vector count 1 00:07:06.193 Module: software 00:07:06.193 Queue depth: 32 00:07:06.193 Allocate depth: 32 00:07:06.193 # threads/core: 1 00:07:06.193 Run time: 1 seconds 00:07:06.193 Verify: No 00:07:06.193 00:07:06.193 Running for 1 seconds... 
00:07:06.193 00:07:06.193 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.193 ------------------------------------------------------------------------------------ 00:07:06.193 0,0 138432/s 540 MiB/s 0 0 00:07:06.193 ==================================================================================== 00:07:06.193 Total 138432/s 540 MiB/s 0 0' 00:07:06.193 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.193 02:09:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:06.193 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.193 02:09:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:06.193 02:09:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.193 02:09:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.193 02:09:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.193 02:09:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.193 02:09:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.193 02:09:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.193 02:09:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.193 02:09:05 -- accel/accel.sh@42 -- # jq -r . 00:07:06.193 [2024-07-15 02:09:05.585962] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:06.193 [2024-07-15 02:09:05.586058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70571 ] 00:07:06.193 [2024-07-15 02:09:05.716674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.452 [2024-07-15 02:09:05.778070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val= 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val= 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val=0x1 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val= 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val= 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val=dif_generate 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val
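A note on the dif_generate numbers above: for a single-core run the per-core row and the Total row describe the same measurement, and the bandwidth follows directly from the transfer count -- 138432 transfers/s at 4096 bytes each is 540 MiB/s after integer truncation. A reader-side spot check in the same shell the harness uses (this arithmetic is a verification aid, not part of accel.sh):

  $ echo $(( 138432 * 4096 / 1048576 ))   # transfers/s * bytes per transfer / bytes per MiB
  540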
00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val= 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val=software 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val=32 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val=32 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val=1 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.452 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.452 02:09:05 -- accel/accel.sh@21 -- # val=No 00:07:06.452 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.453 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.453 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.453 02:09:05 -- accel/accel.sh@21 -- # val= 00:07:06.453 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.453 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.453 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:06.453 02:09:05 -- accel/accel.sh@21 -- # val= 00:07:06.453 02:09:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.453 02:09:05 -- accel/accel.sh@20 -- # IFS=: 00:07:06.453 02:09:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.827 02:09:06 -- accel/accel.sh@21 -- # val= 00:07:07.827 02:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.827 02:09:06 -- accel/accel.sh@21 -- # val= 00:07:07.827 02:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.827 02:09:06 -- accel/accel.sh@21 -- # val= 00:07:07.827 02:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.827 02:09:06 -- 
accel/accel.sh@20 -- # IFS=: 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.827 ************************************ 00:07:07.827 END TEST accel_dif_generate 00:07:07.827 ************************************ 00:07:07.827 02:09:06 -- accel/accel.sh@21 -- # val= 00:07:07.827 02:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.827 02:09:06 -- accel/accel.sh@21 -- # val= 00:07:07.827 02:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.827 02:09:06 -- accel/accel.sh@21 -- # val= 00:07:07.827 02:09:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # IFS=: 00:07:07.827 02:09:06 -- accel/accel.sh@20 -- # read -r var val 00:07:07.827 02:09:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.827 02:09:06 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:07.827 02:09:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.827 00:07:07.827 real 0m2.818s 00:07:07.827 user 0m2.409s 00:07:07.827 sys 0m0.210s 00:07:07.827 02:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.827 02:09:06 -- common/autotest_common.sh@10 -- # set +x 00:07:07.827 02:09:07 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:07.827 02:09:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:07.827 02:09:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.827 02:09:07 -- common/autotest_common.sh@10 -- # set +x 00:07:07.827 ************************************ 00:07:07.827 START TEST accel_dif_generate_copy 00:07:07.827 ************************************ 00:07:07.827 02:09:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:07.827 02:09:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.827 02:09:07 -- accel/accel.sh@17 -- # local accel_module 00:07:07.827 02:09:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:07.827 02:09:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:07.827 02:09:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.827 02:09:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.827 02:09:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.827 02:09:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.827 02:09:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.827 02:09:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.827 02:09:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.827 02:09:07 -- accel/accel.sh@42 -- # jq -r . 00:07:07.827 [2024-07-15 02:09:07.048271] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
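Each run_test case above reduces to one accel_perf invocation, and the xtrace shows it verbatim: /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy. The -c /dev/fd/62 argument feeds the JSON that build_accel_config assembles (empty in these runs, since accel_json_cfg=() stays empty) through an inherited file descriptor. A minimal standalone reproduction, assuming the same tree and that accel defaults are acceptable so the config descriptor can be dropped:

  $ cd /home/vagrant/spdk_repo/spdk
  $ build/examples/accel_perf -t 1 -w dif_generate_copy   # -t: run time in seconds, -w: workload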
00:07:07.827 [2024-07-15 02:09:07.048352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70600 ] 00:07:07.827 [2024-07-15 02:09:07.178765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.827 [2024-07-15 02:09:07.261489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.201 02:09:08 -- accel/accel.sh@18 -- # out=' 00:07:09.201 SPDK Configuration: 00:07:09.201 Core mask: 0x1 00:07:09.201 00:07:09.201 Accel Perf Configuration: 00:07:09.201 Workload Type: dif_generate_copy 00:07:09.201 Vector size: 4096 bytes 00:07:09.201 Transfer size: 4096 bytes 00:07:09.201 Vector count 1 00:07:09.201 Module: software 00:07:09.201 Queue depth: 32 00:07:09.201 Allocate depth: 32 00:07:09.201 # threads/core: 1 00:07:09.201 Run time: 1 seconds 00:07:09.201 Verify: No 00:07:09.201 00:07:09.201 Running for 1 seconds... 00:07:09.201 00:07:09.201 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.201 ------------------------------------------------------------------------------------ 00:07:09.201 0,0 97600/s 381 MiB/s 0 0 00:07:09.201 ==================================================================================== 00:07:09.201 Total 97600/s 381 MiB/s 0 0' 00:07:09.201 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.201 02:09:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:09.201 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.201 02:09:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:09.201 02:09:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.201 02:09:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.201 02:09:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.201 02:09:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.201 02:09:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.201 02:09:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.201 02:09:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.201 02:09:08 -- accel/accel.sh@42 -- # jq -r . 00:07:09.201 [2024-07-15 02:09:08.501957] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:07:09.201 [2024-07-15 02:09:08.502091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70625 ] 00:07:09.201 [2024-07-15 02:09:08.634551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.201 [2024-07-15 02:09:08.744560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val= 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val= 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val=0x1 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val= 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val= 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val= 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val=software 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val=32 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val=32 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 
-- # val=1 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val=No 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val= 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:09.459 02:09:08 -- accel/accel.sh@21 -- # val= 00:07:09.459 02:09:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # IFS=: 00:07:09.459 02:09:08 -- accel/accel.sh@20 -- # read -r var val 00:07:10.839 02:09:09 -- accel/accel.sh@21 -- # val= 00:07:10.839 02:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.839 02:09:09 -- accel/accel.sh@21 -- # val= 00:07:10.839 02:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.839 02:09:09 -- accel/accel.sh@21 -- # val= 00:07:10.839 02:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.839 02:09:09 -- accel/accel.sh@21 -- # val= 00:07:10.839 02:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.839 02:09:09 -- accel/accel.sh@21 -- # val= 00:07:10.839 02:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.839 02:09:09 -- accel/accel.sh@21 -- # val= 00:07:10.839 02:09:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # IFS=: 00:07:10.839 02:09:09 -- accel/accel.sh@20 -- # read -r var val 00:07:10.839 02:09:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.839 02:09:09 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:10.839 02:09:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.839 00:07:10.839 real 0m2.948s 00:07:10.839 user 0m2.524s 00:07:10.839 sys 0m0.220s 00:07:10.839 02:09:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.839 02:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:10.839 ************************************ 00:07:10.839 END TEST accel_dif_generate_copy 00:07:10.839 ************************************ 00:07:10.839 02:09:10 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:10.839 02:09:10 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.839 02:09:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:10.839 02:09:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.839 02:09:10 -- 
common/autotest_common.sh@10 -- # set +x 00:07:10.839 ************************************ 00:07:10.839 START TEST accel_comp 00:07:10.839 ************************************ 00:07:10.839 02:09:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.839 02:09:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.839 02:09:10 -- accel/accel.sh@17 -- # local accel_module 00:07:10.839 02:09:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.839 02:09:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:10.839 02:09:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.839 02:09:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.839 02:09:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.839 02:09:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.839 02:09:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.839 02:09:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.839 02:09:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.839 02:09:10 -- accel/accel.sh@42 -- # jq -r . 00:07:10.839 [2024-07-15 02:09:10.064074] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:10.839 [2024-07-15 02:09:10.064181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70654 ] 00:07:10.839 [2024-07-15 02:09:10.202465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.839 [2024-07-15 02:09:10.308940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.214 02:09:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:12.214 00:07:12.214 SPDK Configuration: 00:07:12.214 Core mask: 0x1 00:07:12.214 00:07:12.214 Accel Perf Configuration: 00:07:12.214 Workload Type: compress 00:07:12.214 Transfer size: 4096 bytes 00:07:12.214 Vector count 1 00:07:12.214 Module: software 00:07:12.214 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.214 Queue depth: 32 00:07:12.214 Allocate depth: 32 00:07:12.214 # threads/core: 1 00:07:12.214 Run time: 1 seconds 00:07:12.214 Verify: No 00:07:12.214 00:07:12.214 Running for 1 seconds... 
00:07:12.214 00:07:12.214 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.214 ------------------------------------------------------------------------------------ 00:07:12.214 0,0 53408/s 208 MiB/s 0 0 00:07:12.214 ==================================================================================== 00:07:12.215 Total 53408/s 208 MiB/s 0 0' 00:07:12.215 02:09:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.215 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.215 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.215 02:09:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.215 02:09:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.215 02:09:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.215 02:09:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.215 02:09:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.215 02:09:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.215 02:09:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.215 02:09:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.215 02:09:11 -- accel/accel.sh@42 -- # jq -r . 00:07:12.215 [2024-07-15 02:09:11.552855] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:12.215 [2024-07-15 02:09:11.552976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70680 ] 00:07:12.215 [2024-07-15 02:09:11.688387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.472 [2024-07-15 02:09:11.782486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val= 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val= 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val= 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val=0x1 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val= 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val= 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val=compress 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=:
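compress is the first workload in this file that needs real input data, which is why its configuration adds a File Name line and the captured output begins with 'Preparing input file...'. The -l flag names the file to compress (test/accel/bib in this repo). A hedged standalone sketch under the same layout assumption:

  $ cd /home/vagrant/spdk_repo/spdk
  $ build/examples/accel_perf -t 1 -w compress -l test/accel/bib   # -l: input file for the (de)compress workloads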
00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val= 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val=software 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val=32 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val=32 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val=1 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val=No 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val= 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:12.472 02:09:11 -- accel/accel.sh@21 -- # val= 00:07:12.472 02:09:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # IFS=: 00:07:12.472 02:09:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.849 02:09:12 -- accel/accel.sh@21 -- # val= 00:07:13.849 02:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.849 02:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:13.849 02:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:13.849 02:09:12 -- accel/accel.sh@21 -- # val= 00:07:13.849 02:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.849 02:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:13.849 02:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:13.849 02:09:12 -- accel/accel.sh@21 -- # val= 00:07:13.849 02:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.849 02:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:13.849 02:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:13.849 02:09:12 -- accel/accel.sh@21 -- # val= 
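Nearly every xtrace line in these tests comes from one small loop in accel.sh (lines 20-22 of this build): the saved accel_perf output is split on ':' into $var/$val pairs and walked through a case statement that captures fields such as accel_opc and accel_module. A rough reconstruction of the idea -- not the verbatim script:

  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=${val##* } ;;      # keep the last word, e.g. 'compress'
          *Module*)          accel_module=${val##* } ;;   # e.g. 'software'
      esac
  done <<< "$out"

That loop shape is exactly why the trace repeats IFS=: / read -r var val around every val= assignment.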
00:07:13.849 02:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.849 02:09:12 -- accel/accel.sh@20 -- # IFS=: 00:07:13.849 02:09:12 -- accel/accel.sh@20 -- # read -r var val 00:07:13.849 02:09:12 -- accel/accel.sh@21 -- # val= 00:07:13.849 02:09:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.849 02:09:13 -- accel/accel.sh@20 -- # IFS=: 00:07:13.849 02:09:13 -- accel/accel.sh@20 -- # read -r var val 00:07:13.849 02:09:13 -- accel/accel.sh@21 -- # val= 00:07:13.849 02:09:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.849 02:09:13 -- accel/accel.sh@20 -- # IFS=: 00:07:13.849 02:09:13 -- accel/accel.sh@20 -- # read -r var val 00:07:13.849 02:09:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.849 02:09:13 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:13.849 02:09:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.849 00:07:13.849 real 0m2.965s 00:07:13.849 user 0m2.517s 00:07:13.849 sys 0m0.245s 00:07:13.849 02:09:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.849 02:09:13 -- common/autotest_common.sh@10 -- # set +x 00:07:13.849 ************************************ 00:07:13.849 END TEST accel_comp 00:07:13.849 ************************************ 00:07:13.849 02:09:13 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.849 02:09:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:13.849 02:09:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.849 02:09:13 -- common/autotest_common.sh@10 -- # set +x 00:07:13.849 ************************************ 00:07:13.849 START TEST accel_decomp 00:07:13.849 ************************************ 00:07:13.849 02:09:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.849 02:09:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.849 02:09:13 -- accel/accel.sh@17 -- # local accel_module 00:07:13.849 02:09:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.849 02:09:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.849 02:09:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.849 02:09:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.849 02:09:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.849 02:09:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.849 02:09:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.849 02:09:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.849 02:09:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.849 02:09:13 -- accel/accel.sh@42 -- # jq -r . 00:07:13.849 [2024-07-15 02:09:13.083507] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:13.849 [2024-07-15 02:09:13.083659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70710 ] 00:07:13.849 [2024-07-15 02:09:13.218650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.849 [2024-07-15 02:09:13.312530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.219 02:09:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:15.219 00:07:15.219 SPDK Configuration: 00:07:15.219 Core mask: 0x1 00:07:15.219 00:07:15.219 Accel Perf Configuration: 00:07:15.219 Workload Type: decompress 00:07:15.219 Transfer size: 4096 bytes 00:07:15.219 Vector count 1 00:07:15.219 Module: software 00:07:15.219 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.219 Queue depth: 32 00:07:15.219 Allocate depth: 32 00:07:15.219 # threads/core: 1 00:07:15.219 Run time: 1 seconds 00:07:15.219 Verify: Yes 00:07:15.219 00:07:15.219 Running for 1 seconds... 00:07:15.219 00:07:15.219 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.219 ------------------------------------------------------------------------------------ 00:07:15.219 0,0 76288/s 298 MiB/s 0 0 00:07:15.219 ==================================================================================== 00:07:15.219 Total 76288/s 298 MiB/s 0 0' 00:07:15.219 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.219 02:09:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.219 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.219 02:09:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.219 02:09:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.219 02:09:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.219 02:09:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.219 02:09:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.219 02:09:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.219 02:09:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.219 02:09:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.219 02:09:14 -- accel/accel.sh@42 -- # jq -r . 00:07:15.219 [2024-07-15 02:09:14.555683] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
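Unlike the compress run before it, this decompress run passes -y, which turns on data verification -- hence 'Verify: Yes' in this configuration versus 'Verify: No' in the earlier ones. A standalone sketch under the same repo-layout assumption:

  $ build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y   # -y: verify decompressed output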
00:07:15.219 [2024-07-15 02:09:14.555956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70730 ] 00:07:15.219 [2024-07-15 02:09:14.689992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.476 [2024-07-15 02:09:14.787682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val= 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val= 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val= 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val=0x1 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val= 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val= 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val=decompress 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val= 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val=software 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val=32 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- 
accel/accel.sh@21 -- # val=32 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val=1 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val=Yes 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val= 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:15.476 02:09:14 -- accel/accel.sh@21 -- # val= 00:07:15.476 02:09:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # IFS=: 00:07:15.476 02:09:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.868 02:09:15 -- accel/accel.sh@21 -- # val= 00:07:16.868 02:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:16.868 02:09:15 -- accel/accel.sh@21 -- # val= 00:07:16.868 02:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:16.868 02:09:15 -- accel/accel.sh@21 -- # val= 00:07:16.868 02:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:16.868 02:09:15 -- accel/accel.sh@21 -- # val= 00:07:16.868 02:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:16.868 02:09:15 -- accel/accel.sh@21 -- # val= 00:07:16.868 02:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # IFS=: 00:07:16.868 02:09:15 -- accel/accel.sh@20 -- # read -r var val 00:07:16.868 02:09:15 -- accel/accel.sh@21 -- # val= 00:07:16.868 ************************************ 00:07:16.868 END TEST accel_decomp 00:07:16.868 ************************************ 00:07:16.868 02:09:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.868 02:09:16 -- accel/accel.sh@20 -- # IFS=: 00:07:16.868 02:09:16 -- accel/accel.sh@20 -- # read -r var val 00:07:16.868 02:09:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.869 02:09:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:16.869 02:09:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.869 00:07:16.869 real 0m2.946s 00:07:16.869 user 0m2.514s 00:07:16.869 sys 0m0.230s 00:07:16.869 02:09:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.869 02:09:16 -- common/autotest_common.sh@10 -- # set +x 00:07:16.869 02:09:16 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
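The next case, accel_decmop_full (the harness's own spelling), reuses the decompress workload but adds -o 0. The visible effect in this trace is the transfer size growing from 4096 to 111250 bytes, i.e. each operation handles a full chunk of the bib file rather than a 4 KiB slice; reading -o 0 that way is an inference from this log, not a documented flag description. Sketch, same assumptions as above:

  $ build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0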
00:07:16.869 02:09:16 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:16.869 02:09:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.869 02:09:16 -- common/autotest_common.sh@10 -- # set +x 00:07:16.869 ************************************ 00:07:16.869 START TEST accel_decmop_full 00:07:16.869 ************************************ 00:07:16.869 02:09:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.869 02:09:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.869 02:09:16 -- accel/accel.sh@17 -- # local accel_module 00:07:16.869 02:09:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.869 02:09:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:16.869 02:09:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.869 02:09:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.869 02:09:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.869 02:09:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.869 02:09:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.869 02:09:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.869 02:09:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.869 02:09:16 -- accel/accel.sh@42 -- # jq -r . 00:07:16.869 [2024-07-15 02:09:16.080387] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:16.869 [2024-07-15 02:09:16.080495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70764 ] 00:07:16.869 [2024-07-15 02:09:16.221788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.869 [2024-07-15 02:09:16.327170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.245 02:09:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:18.245 00:07:18.245 SPDK Configuration: 00:07:18.245 Core mask: 0x1 00:07:18.245 00:07:18.245 Accel Perf Configuration: 00:07:18.245 Workload Type: decompress 00:07:18.245 Transfer size: 111250 bytes 00:07:18.245 Vector count 1 00:07:18.245 Module: software 00:07:18.245 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.245 Queue depth: 32 00:07:18.245 Allocate depth: 32 00:07:18.245 # threads/core: 1 00:07:18.245 Run time: 1 seconds 00:07:18.245 Verify: Yes 00:07:18.245 00:07:18.245 Running for 1 seconds... 
00:07:18.245 00:07:18.245 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.245 ------------------------------------------------------------------------------------ 00:07:18.245 0,0 4704/s 499 MiB/s 0 0 00:07:18.246 ==================================================================================== 00:07:18.246 Total 4704/s 499 MiB/s 0 0' 00:07:18.246 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.246 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.246 02:09:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:18.246 02:09:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:18.246 02:09:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.246 02:09:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.246 02:09:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.246 02:09:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.246 02:09:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.246 02:09:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.246 02:09:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.246 02:09:17 -- accel/accel.sh@42 -- # jq -r . 00:07:18.246 [2024-07-15 02:09:17.582487] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:18.246 [2024-07-15 02:09:17.582665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:07:18.246 [2024-07-15 02:09:17.716641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.246 [2024-07-15 02:09:17.796314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val= 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val= 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val= 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val=0x1 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val= 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val= 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val=decompress 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:18.504 02:09:17 -- accel/accel.sh@20
-- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val= 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val=software 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val=32 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val=32 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val=1 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val=Yes 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val= 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:18.504 02:09:17 -- accel/accel.sh@21 -- # val= 00:07:18.504 02:09:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # IFS=: 00:07:18.504 02:09:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.882 02:09:19 -- accel/accel.sh@21 -- # val= 00:07:19.882 02:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:19.882 02:09:19 -- accel/accel.sh@21 -- # val= 00:07:19.882 02:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:19.882 02:09:19 -- accel/accel.sh@21 -- # val= 00:07:19.882 02:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:19.882 02:09:19 -- accel/accel.sh@21 -- # 
val= 00:07:19.882 02:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:19.882 02:09:19 -- accel/accel.sh@21 -- # val= 00:07:19.882 ************************************ 00:07:19.882 END TEST accel_decmop_full 00:07:19.882 ************************************ 00:07:19.882 02:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:19.882 02:09:19 -- accel/accel.sh@21 -- # val= 00:07:19.882 02:09:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # IFS=: 00:07:19.882 02:09:19 -- accel/accel.sh@20 -- # read -r var val 00:07:19.882 02:09:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.882 02:09:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:19.882 02:09:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.882 00:07:19.882 real 0m2.973s 00:07:19.882 user 0m2.533s 00:07:19.882 sys 0m0.229s 00:07:19.882 02:09:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.882 02:09:19 -- common/autotest_common.sh@10 -- # set +x 00:07:19.882 02:09:19 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.882 02:09:19 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:19.882 02:09:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.882 02:09:19 -- common/autotest_common.sh@10 -- # set +x 00:07:19.882 ************************************ 00:07:19.882 START TEST accel_decomp_mcore 00:07:19.882 ************************************ 00:07:19.882 02:09:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.882 02:09:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.882 02:09:19 -- accel/accel.sh@17 -- # local accel_module 00:07:19.882 02:09:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.882 02:09:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.882 02:09:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.882 02:09:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.882 02:09:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.882 02:09:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.882 02:09:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.882 02:09:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.882 02:09:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.882 02:09:19 -- accel/accel.sh@42 -- # jq -r . 00:07:19.882 [2024-07-15 02:09:19.110779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
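accel_decomp_mcore is the same decompress workload fanned out with -m 0xf, a core mask of binary 1111: the EAL line that follows switches from -c 0x1 to -c 0xf and four reactors come up on cores 0-3, one worker thread per core. Standalone sketch, same assumptions as above:

  $ build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf   # -m: core mask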
00:07:19.882 [2024-07-15 02:09:19.110920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70818 ] 00:07:19.882 [2024-07-15 02:09:19.248547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.882 [2024-07-15 02:09:19.346211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.882 [2024-07-15 02:09:19.346273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.882 [2024-07-15 02:09:19.346397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.882 [2024-07-15 02:09:19.346401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.260 02:09:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:21.260 00:07:21.260 SPDK Configuration: 00:07:21.260 Core mask: 0xf 00:07:21.260 00:07:21.260 Accel Perf Configuration: 00:07:21.260 Workload Type: decompress 00:07:21.260 Transfer size: 4096 bytes 00:07:21.260 Vector count 1 00:07:21.260 Module: software 00:07:21.260 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.260 Queue depth: 32 00:07:21.260 Allocate depth: 32 00:07:21.260 # threads/core: 1 00:07:21.260 Run time: 1 seconds 00:07:21.260 Verify: Yes 00:07:21.260 00:07:21.260 Running for 1 seconds... 00:07:21.260 00:07:21.260 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.260 ------------------------------------------------------------------------------------ 00:07:21.260 0,0 58304/s 227 MiB/s 0 0 00:07:21.260 3,0 57888/s 226 MiB/s 0 0 00:07:21.260 2,0 55264/s 215 MiB/s 0 0 00:07:21.260 1,0 59040/s 230 MiB/s 0 0 00:07:21.260 ==================================================================================== 00:07:21.260 Total 230496/s 900 MiB/s 0 0' 00:07:21.260 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.260 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.260 02:09:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:21.260 02:09:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:21.260 02:09:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.260 02:09:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.260 02:09:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.260 02:09:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.260 02:09:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.260 02:09:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.260 02:09:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.260 02:09:20 -- accel/accel.sh@42 -- # jq -r . 00:07:21.260 [2024-07-15 02:09:20.621199] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
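In the multi-core table above, the Total row is the sum of the four per-core transfer counts, and the aggregate bandwidth follows from it the same way as in the single-core runs. A quick reader-side check:

  $ echo $(( 58304 + 57888 + 55264 + 59040 ))   # per-core transfers/s summed
  230496
  $ echo $(( 230496 * 4096 / 1048576 ))         # aggregate MiB/s
  900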
00:07:21.260 [2024-07-15 02:09:20.621559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70841 ] 00:07:21.260 [2024-07-15 02:09:20.759508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.520 [2024-07-15 02:09:20.835723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.520 [2024-07-15 02:09:20.835849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.520 [2024-07-15 02:09:20.835962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.520 [2024-07-15 02:09:20.835968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val= 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val= 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val= 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val=0xf 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val= 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val= 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val=decompress 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val= 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val=software 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 
00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val=32 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val=32 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val=1 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val=Yes 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val= 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:21.520 02:09:20 -- accel/accel.sh@21 -- # val= 00:07:21.520 02:09:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # IFS=: 00:07:21.520 02:09:20 -- accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@21 -- # val= 00:07:22.894 02:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@21 -- # val= 00:07:22.894 02:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@21 -- # val= 00:07:22.894 02:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@21 -- # val= 00:07:22.894 02:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@21 -- # val= 00:07:22.894 02:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@21 -- # val= 00:07:22.894 02:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@21 -- # val= 00:07:22.894 02:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@21 -- # val= 00:07:22.894 02:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:22.894 02:09:22 -- 
accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@21 -- # val= 00:07:22.894 02:09:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # IFS=: 00:07:22.894 02:09:22 -- accel/accel.sh@20 -- # read -r var val 00:07:22.894 02:09:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.894 02:09:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:22.894 02:09:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.894 00:07:22.894 real 0m2.999s 00:07:22.894 user 0m9.392s 00:07:22.894 sys 0m0.262s 00:07:22.894 02:09:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.894 ************************************ 00:07:22.894 END TEST accel_decomp_mcore 00:07:22.894 02:09:22 -- common/autotest_common.sh@10 -- # set +x 00:07:22.894 ************************************ 00:07:22.894 02:09:22 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.894 02:09:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:22.894 02:09:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.894 02:09:22 -- common/autotest_common.sh@10 -- # set +x 00:07:22.894 ************************************ 00:07:22.894 START TEST accel_decomp_full_mcore 00:07:22.894 ************************************ 00:07:22.894 02:09:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.894 02:09:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.894 02:09:22 -- accel/accel.sh@17 -- # local accel_module 00:07:22.894 02:09:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.894 02:09:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.894 02:09:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.894 02:09:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.894 02:09:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.894 02:09:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.894 02:09:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.894 02:09:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.894 02:09:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.894 02:09:22 -- accel/accel.sh@42 -- # jq -r . 00:07:22.894 [2024-07-15 02:09:22.155755] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:22.894 [2024-07-15 02:09:22.155846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70878 ] 00:07:22.894 [2024-07-15 02:09:22.288688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.894 [2024-07-15 02:09:22.382592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.894 [2024-07-15 02:09:22.382749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.894 [2024-07-15 02:09:22.383509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.894 [2024-07-15 02:09:22.383565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.330 02:09:23 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:24.330 00:07:24.330 SPDK Configuration: 00:07:24.330 Core mask: 0xf 00:07:24.330 00:07:24.330 Accel Perf Configuration: 00:07:24.330 Workload Type: decompress 00:07:24.330 Transfer size: 111250 bytes 00:07:24.330 Vector count 1 00:07:24.330 Module: software 00:07:24.330 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:24.330 Queue depth: 32 00:07:24.330 Allocate depth: 32 00:07:24.330 # threads/core: 1 00:07:24.330 Run time: 1 seconds 00:07:24.330 Verify: Yes 00:07:24.330 00:07:24.330 Running for 1 seconds... 00:07:24.330 00:07:24.330 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.330 ------------------------------------------------------------------------------------ 00:07:24.330 0,0 4672/s 192 MiB/s 0 0 00:07:24.330 3,0 4672/s 192 MiB/s 0 0 00:07:24.330 2,0 4672/s 192 MiB/s 0 0 00:07:24.330 1,0 4704/s 194 MiB/s 0 0 00:07:24.330 ==================================================================================== 00:07:24.330 Total 18720/s 1986 MiB/s 0 0' 00:07:24.330 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.330 02:09:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.330 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.330 02:09:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.330 02:09:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.330 02:09:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.330 02:09:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.330 02:09:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.330 02:09:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.330 02:09:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.330 02:09:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.330 02:09:23 -- accel/accel.sh@42 -- # jq -r . 00:07:24.330 [2024-07-15 02:09:23.623551] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
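The accel_decomp_full_mcore variant adds -o 0 to the same command, and the harness reports a 111250-byte transfer size, so each operation decompresses one full-file-sized buffer instead of a 4096-byte block (reading -o 0 as "use the whole test file per operation" is an inference from this output, not something the log states). The bandwidth math again checks out:

  # 18720 transfers/s x 111250 bytes = 2,082,600,000 B/s, i.e. the reported 1986 MiB/s
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf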
00:07:24.330 [2024-07-15 02:09:23.623668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70901 ] 00:07:24.330 [2024-07-15 02:09:23.751825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.330 [2024-07-15 02:09:23.825431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.330 [2024-07-15 02:09:23.825547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.330 [2024-07-15 02:09:23.826637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.330 [2024-07-15 02:09:23.826641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val= 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val= 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val= 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val=0xf 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val= 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val= 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val=decompress 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val= 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val=software 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 
00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val=32 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val=32 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val=1 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.588 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.588 02:09:23 -- accel/accel.sh@21 -- # val=Yes 00:07:24.588 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.589 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.589 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.589 02:09:23 -- accel/accel.sh@21 -- # val= 00:07:24.589 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.589 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.589 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:24.589 02:09:23 -- accel/accel.sh@21 -- # val= 00:07:24.589 02:09:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.589 02:09:23 -- accel/accel.sh@20 -- # IFS=: 00:07:24.589 02:09:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@21 -- # val= 00:07:25.523 02:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # IFS=: 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@21 -- # val= 00:07:25.523 02:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # IFS=: 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@21 -- # val= 00:07:25.523 02:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # IFS=: 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@21 -- # val= 00:07:25.523 02:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # IFS=: 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@21 -- # val= 00:07:25.523 02:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # IFS=: 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@21 -- # val= 00:07:25.523 02:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # IFS=: 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@21 -- # val= 00:07:25.523 02:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # IFS=: 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@21 -- # val= 00:07:25.523 02:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # IFS=: 00:07:25.523 02:09:25 -- 
accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@21 -- # val= 00:07:25.523 02:09:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # IFS=: 00:07:25.523 02:09:25 -- accel/accel.sh@20 -- # read -r var val 00:07:25.523 02:09:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.523 02:09:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:25.523 02:09:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.523 00:07:25.523 real 0m2.923s 00:07:25.523 user 0m9.328s 00:07:25.523 sys 0m0.262s 00:07:25.523 02:09:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.523 02:09:25 -- common/autotest_common.sh@10 -- # set +x 00:07:25.523 ************************************ 00:07:25.523 END TEST accel_decomp_full_mcore 00:07:25.523 ************************************ 00:07:25.781 02:09:25 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.781 02:09:25 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:25.781 02:09:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.781 02:09:25 -- common/autotest_common.sh@10 -- # set +x 00:07:25.781 ************************************ 00:07:25.781 START TEST accel_decomp_mthread 00:07:25.781 ************************************ 00:07:25.781 02:09:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.781 02:09:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.781 02:09:25 -- accel/accel.sh@17 -- # local accel_module 00:07:25.781 02:09:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.781 02:09:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.781 02:09:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.781 02:09:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.781 02:09:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.781 02:09:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.781 02:09:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.781 02:09:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.781 02:09:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.781 02:09:25 -- accel/accel.sh@42 -- # jq -r . 00:07:25.781 [2024-07-15 02:09:25.120226] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:25.781 [2024-07-15 02:09:25.120316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70933 ] 00:07:25.781 [2024-07-15 02:09:25.254794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.781 [2024-07-15 02:09:25.316264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.154 02:09:26 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:27.154 00:07:27.154 SPDK Configuration: 00:07:27.154 Core mask: 0x1 00:07:27.154 00:07:27.154 Accel Perf Configuration: 00:07:27.154 Workload Type: decompress 00:07:27.154 Transfer size: 4096 bytes 00:07:27.154 Vector count 1 00:07:27.154 Module: software 00:07:27.154 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.154 Queue depth: 32 00:07:27.154 Allocate depth: 32 00:07:27.154 # threads/core: 2 00:07:27.155 Run time: 1 seconds 00:07:27.155 Verify: Yes 00:07:27.155 00:07:27.155 Running for 1 seconds... 00:07:27.155 00:07:27.155 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.155 ------------------------------------------------------------------------------------ 00:07:27.155 0,1 38304/s 70 MiB/s 0 0 00:07:27.155 0,0 38144/s 70 MiB/s 0 0 00:07:27.155 ==================================================================================== 00:07:27.155 Total 76448/s 298 MiB/s 0 0' 00:07:27.155 02:09:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:27.155 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.155 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.155 02:09:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:27.155 02:09:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.155 02:09:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.155 02:09:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.155 02:09:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.155 02:09:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.155 02:09:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.155 02:09:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.155 02:09:26 -- accel/accel.sh@42 -- # jq -r . 00:07:27.155 [2024-07-15 02:09:26.527113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
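accel_decomp_mthread drops back to one core (mask 0x1) and instead scales with -T 2, two worker threads on that core; the 0,0 and 0,1 rows in the table are core 0's two threads, and 76448 transfers/s at 4096 bytes each is the reported 298 MiB/s. Verbatim invocation:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2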
00:07:27.155 [2024-07-15 02:09:26.527204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70952 ] 00:07:27.155 [2024-07-15 02:09:26.662214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.413 [2024-07-15 02:09:26.735301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val= 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val= 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val= 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val=0x1 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val= 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val= 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val=decompress 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val= 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val=software 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val=32 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- 
accel/accel.sh@21 -- # val=32 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val=2 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val=Yes 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val= 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:27.413 02:09:26 -- accel/accel.sh@21 -- # val= 00:07:27.413 02:09:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # IFS=: 00:07:27.413 02:09:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.790 02:09:27 -- accel/accel.sh@21 -- # val= 00:07:28.790 02:09:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # IFS=: 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # read -r var val 00:07:28.790 02:09:27 -- accel/accel.sh@21 -- # val= 00:07:28.790 02:09:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # IFS=: 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # read -r var val 00:07:28.790 02:09:27 -- accel/accel.sh@21 -- # val= 00:07:28.790 02:09:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # IFS=: 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # read -r var val 00:07:28.790 02:09:27 -- accel/accel.sh@21 -- # val= 00:07:28.790 02:09:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # IFS=: 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # read -r var val 00:07:28.790 02:09:27 -- accel/accel.sh@21 -- # val= 00:07:28.790 02:09:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # IFS=: 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # read -r var val 00:07:28.790 02:09:27 -- accel/accel.sh@21 -- # val= 00:07:28.790 02:09:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # IFS=: 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # read -r var val 00:07:28.790 02:09:27 -- accel/accel.sh@21 -- # val= 00:07:28.790 02:09:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # IFS=: 00:07:28.790 02:09:27 -- accel/accel.sh@20 -- # read -r var val 00:07:28.790 02:09:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.790 ************************************ 00:07:28.790 END TEST accel_decomp_mthread 00:07:28.790 ************************************ 00:07:28.790 02:09:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.790 02:09:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.790 00:07:28.790 real 0m2.861s 00:07:28.790 user 0m2.424s 00:07:28.790 sys 0m0.236s 00:07:28.790 02:09:27 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:28.790 02:09:27 -- common/autotest_common.sh@10 -- # set +x 00:07:28.790 02:09:28 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.790 02:09:28 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:28.790 02:09:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.790 02:09:28 -- common/autotest_common.sh@10 -- # set +x 00:07:28.790 ************************************ 00:07:28.790 START TEST accel_deomp_full_mthread 00:07:28.790 ************************************ 00:07:28.790 02:09:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.790 02:09:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.790 02:09:28 -- accel/accel.sh@17 -- # local accel_module 00:07:28.790 02:09:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.790 02:09:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:28.790 02:09:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.790 02:09:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.790 02:09:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.790 02:09:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.790 02:09:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.790 02:09:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.790 02:09:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.790 02:09:28 -- accel/accel.sh@42 -- # jq -r . 00:07:28.790 [2024-07-15 02:09:28.036644] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:28.790 [2024-07-15 02:09:28.036760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70987 ] 00:07:28.790 [2024-07-15 02:09:28.170869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.790 [2024-07-15 02:09:28.258939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.166 02:09:29 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.166 00:07:30.166 SPDK Configuration: 00:07:30.166 Core mask: 0x1 00:07:30.166 00:07:30.167 Accel Perf Configuration: 00:07:30.167 Workload Type: decompress 00:07:30.167 Transfer size: 111250 bytes 00:07:30.167 Vector count 1 00:07:30.167 Module: software 00:07:30.167 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.167 Queue depth: 32 00:07:30.167 Allocate depth: 32 00:07:30.167 # threads/core: 2 00:07:30.167 Run time: 1 seconds 00:07:30.167 Verify: Yes 00:07:30.167 00:07:30.167 Running for 1 seconds... 
00:07:30.167 00:07:30.167 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.167 ------------------------------------------------------------------------------------ 00:07:30.167 0,1 2304/s 95 MiB/s 0 0 00:07:30.167 0,0 2272/s 93 MiB/s 0 0 00:07:30.167 ==================================================================================== 00:07:30.167 Total 4576/s 485 MiB/s 0 0' 00:07:30.167 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.167 02:09:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.167 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.167 02:09:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.167 02:09:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.167 02:09:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.167 02:09:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.167 02:09:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.167 02:09:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.167 02:09:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.167 02:09:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.167 02:09:29 -- accel/accel.sh@42 -- # jq -r . 00:07:30.167 [2024-07-15 02:09:29.512763] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:30.167 [2024-07-15 02:09:29.512841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71006 ] 00:07:30.167 [2024-07-15 02:09:29.642506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.425 [2024-07-15 02:09:29.734040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.425 02:09:29 -- accel/accel.sh@21 -- # val= 00:07:30.425 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val= 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val= 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val=0x1 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val= 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val= 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val=decompress 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val= 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val=software 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val=32 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val=32 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val=2 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val=Yes 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val= 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:30.426 02:09:29 -- accel/accel.sh@21 -- # val= 00:07:30.426 02:09:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # IFS=: 00:07:30.426 02:09:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.827 02:09:30 -- accel/accel.sh@21 -- # val= 00:07:31.827 02:09:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # IFS=: 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # read -r var val 00:07:31.827 02:09:30 -- accel/accel.sh@21 -- # val= 00:07:31.827 02:09:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # IFS=: 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # read -r var val 00:07:31.827 02:09:30 -- accel/accel.sh@21 -- # val= 00:07:31.827 02:09:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # IFS=: 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # 
read -r var val 00:07:31.827 02:09:30 -- accel/accel.sh@21 -- # val= 00:07:31.827 02:09:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # IFS=: 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # read -r var val 00:07:31.827 02:09:30 -- accel/accel.sh@21 -- # val= 00:07:31.827 02:09:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # IFS=: 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # read -r var val 00:07:31.827 02:09:30 -- accel/accel.sh@21 -- # val= 00:07:31.827 02:09:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # IFS=: 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # read -r var val 00:07:31.827 02:09:30 -- accel/accel.sh@21 -- # val= 00:07:31.827 02:09:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # IFS=: 00:07:31.827 02:09:30 -- accel/accel.sh@20 -- # read -r var val 00:07:31.827 02:09:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.827 02:09:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.827 ************************************ 00:07:31.827 END TEST accel_deomp_full_mthread 00:07:31.827 ************************************ 00:07:31.827 02:09:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.827 00:07:31.827 real 0m2.962s 00:07:31.827 user 0m2.534s 00:07:31.827 sys 0m0.227s 00:07:31.827 02:09:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.827 02:09:30 -- common/autotest_common.sh@10 -- # set +x 00:07:31.827 02:09:31 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:31.827 02:09:31 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:31.827 02:09:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:31.827 02:09:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.827 02:09:31 -- accel/accel.sh@129 -- # build_accel_config 00:07:31.827 02:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:31.827 02:09:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.827 02:09:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.827 02:09:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.827 02:09:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.827 02:09:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.827 02:09:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.827 02:09:31 -- accel/accel.sh@42 -- # jq -r . 00:07:31.827 ************************************ 00:07:31.827 START TEST accel_dif_functional_tests 00:07:31.827 ************************************ 00:07:31.827 02:09:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:31.827 [2024-07-15 02:09:31.083156] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
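The long val=/IFS=:/read runs threaded through every decompress test above are accel.sh parsing the captured accel_perf summary in $out; a minimal sketch of that loop, reconstructed from the xtrace rather than copied from accel.sh (the exact field trimming may differ), is:

  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;   # yields accel_opc=decompress above
          *Module*) accel_module=${val//[[:space:]]/} ;;         # yields accel_module=software above
      esac
  done <<< "$out"

The [[ -n software ]], [[ -n decompress ]] and [[ software == \s\o\f\t\w\a\r\e ]] checks after each run are the assertions on those parsed values. (As with "accel_decmop_full", the "accel_deomp_full_mthread" banner is the harness's literal run_test label.)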
00:07:31.827 [2024-07-15 02:09:31.083268] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71042 ] 00:07:31.827 [2024-07-15 02:09:31.220759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.827 [2024-07-15 02:09:31.304893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.827 [2024-07-15 02:09:31.305031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.827 [2024-07-15 02:09:31.305036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.102 00:07:32.102 00:07:32.102 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.102 http://cunit.sourceforge.net/ 00:07:32.102 00:07:32.102 00:07:32.102 Suite: accel_dif 00:07:32.102 Test: verify: DIF generated, GUARD check ...passed 00:07:32.102 Test: verify: DIF generated, APPTAG check ...passed 00:07:32.102 Test: verify: DIF generated, REFTAG check ...passed 00:07:32.102 Test: verify: DIF not generated, GUARD check ...passed 00:07:32.102 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 02:09:31.394575] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:32.102 [2024-07-15 02:09:31.394744] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:32.102 [2024-07-15 02:09:31.394785] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:32.102 passed 00:07:32.102 Test: verify: DIF not generated, REFTAG check ...passed 00:07:32.102 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:32.102 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 02:09:31.394815] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:32.102 [2024-07-15 02:09:31.394842] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:32.102 [2024-07-15 02:09:31.394913] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:32.102 [2024-07-15 02:09:31.394982] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:32.102 passed 00:07:32.102 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:32.102 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:32.102 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:32.102 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:32.102 Test: generate copy: DIF generated, GUARD check ...[2024-07-15 02:09:31.395290] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:32.102 passed 00:07:32.102 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:32.102 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:32.102 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:32.102 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:32.102 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:32.102 Test: generate copy: iovecs-len validate ...passed 00:07:32.102 Test: generate copy: buffer alignment validate ...[2024-07-15 02:09:31.395884] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:32.102 passed 00:07:32.102 00:07:32.102 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.102 suites 1 1 n/a 0 0 00:07:32.102 tests 20 20 20 0 0 00:07:32.102 asserts 204 204 204 0 n/a 00:07:32.102 00:07:32.102 Elapsed time = 0.005 seconds 00:07:32.102 00:07:32.102 real 0m0.576s 00:07:32.102 user 0m0.771s 00:07:32.102 sys 0m0.148s 00:07:32.102 02:09:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.102 ************************************ 00:07:32.102 END TEST accel_dif_functional_tests 00:07:32.102 ************************************ 00:07:32.102 02:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.375 00:07:32.375 real 1m2.221s 00:07:32.375 user 1m6.389s 00:07:32.375 sys 0m6.171s 00:07:32.375 02:09:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.375 ************************************ 00:07:32.375 END TEST accel 00:07:32.375 ************************************ 00:07:32.375 02:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.375 02:09:31 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:32.375 02:09:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:32.375 02:09:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.375 02:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.375 ************************************ 00:07:32.375 START TEST accel_rpc 00:07:32.375 ************************************ 00:07:32.375 02:09:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:32.375 * Looking for test storage... 00:07:32.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:32.375 02:09:31 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:32.375 02:09:31 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71111 00:07:32.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.375 02:09:31 -- accel/accel_rpc.sh@15 -- # waitforlisten 71111 00:07:32.375 02:09:31 -- common/autotest_common.sh@819 -- # '[' -z 71111 ']' 00:07:32.375 02:09:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.375 02:09:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:32.375 02:09:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.375 02:09:31 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:32.375 02:09:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:32.375 02:09:31 -- common/autotest_common.sh@10 -- # set +x 00:07:32.375 [2024-07-15 02:09:31.831043] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
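All 20 CUnit cases in the accel_dif suite passed; the dif.c *ERROR* lines in the block above are the mismatches deliberately injected by the negative-path cases (for example guard Expected=5a5a vs Actual=7867), and each of those cases still ends in "passed". The suite can also be exercised outside the harness; note the harness fed its JSON config over -c /dev/fd/62, so running it without a config is an assumption here:

  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif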
00:07:32.375 [2024-07-15 02:09:31.831156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71111 ] 00:07:32.633 [2024-07-15 02:09:31.968994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.633 [2024-07-15 02:09:32.065139] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.633 [2024-07-15 02:09:32.065314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.568 02:09:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:33.568 02:09:32 -- common/autotest_common.sh@852 -- # return 0 00:07:33.568 02:09:32 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:33.568 02:09:32 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:33.568 02:09:32 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:33.568 02:09:32 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:33.568 02:09:32 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:33.568 02:09:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:33.568 02:09:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.568 02:09:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.568 ************************************ 00:07:33.568 START TEST accel_assign_opcode 00:07:33.568 ************************************ 00:07:33.568 02:09:32 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:33.568 02:09:32 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:33.568 02:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.568 02:09:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.568 [2024-07-15 02:09:32.813911] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:33.568 02:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.568 02:09:32 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:33.568 02:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.568 02:09:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.568 [2024-07-15 02:09:32.821881] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:33.568 02:09:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.568 02:09:32 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:33.568 02:09:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.568 02:09:32 -- common/autotest_common.sh@10 -- # set +x 00:07:33.568 02:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.568 02:09:33 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:33.568 02:09:33 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:33.568 02:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.568 02:09:33 -- common/autotest_common.sh@10 -- # set +x 00:07:33.568 02:09:33 -- accel/accel_rpc.sh@42 -- # grep software 00:07:33.568 02:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.568 software 00:07:33.568 ************************************ 00:07:33.568 END TEST accel_assign_opcode 00:07:33.568 ************************************ 00:07:33.568 00:07:33.568 real 0m0.288s 00:07:33.568 user 0m0.056s 00:07:33.568 sys 0m0.009s 00:07:33.568 02:09:33 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.568 02:09:33 -- common/autotest_common.sh@10 -- # set +x 00:07:33.824 02:09:33 -- accel/accel_rpc.sh@55 -- # killprocess 71111 00:07:33.824 02:09:33 -- common/autotest_common.sh@926 -- # '[' -z 71111 ']' 00:07:33.824 02:09:33 -- common/autotest_common.sh@930 -- # kill -0 71111 00:07:33.824 02:09:33 -- common/autotest_common.sh@931 -- # uname 00:07:33.824 02:09:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:33.824 02:09:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71111 00:07:33.824 killing process with pid 71111 00:07:33.824 02:09:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:33.824 02:09:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:33.824 02:09:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71111' 00:07:33.824 02:09:33 -- common/autotest_common.sh@945 -- # kill 71111 00:07:33.824 02:09:33 -- common/autotest_common.sh@950 -- # wait 71111 00:07:34.080 00:07:34.080 real 0m1.840s 00:07:34.080 user 0m1.934s 00:07:34.080 sys 0m0.443s 00:07:34.080 ************************************ 00:07:34.080 END TEST accel_rpc 00:07:34.080 ************************************ 00:07:34.080 02:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.080 02:09:33 -- common/autotest_common.sh@10 -- # set +x 00:07:34.080 02:09:33 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:34.080 02:09:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:34.080 02:09:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.080 02:09:33 -- common/autotest_common.sh@10 -- # set +x 00:07:34.080 ************************************ 00:07:34.080 START TEST app_cmdline 00:07:34.080 ************************************ 00:07:34.080 02:09:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:34.080 * Looking for test storage... 00:07:34.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:34.080 02:09:33 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:34.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.080 02:09:33 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71214 00:07:34.080 02:09:33 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:34.080 02:09:33 -- app/cmdline.sh@18 -- # waitforlisten 71214 00:07:34.080 02:09:33 -- common/autotest_common.sh@819 -- # '[' -z 71214 ']' 00:07:34.080 02:09:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.080 02:09:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:34.080 02:09:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.080 02:09:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:34.080 02:09:33 -- common/autotest_common.sh@10 -- # set +x 00:07:34.336 [2024-07-15 02:09:33.688183] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
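The accel_rpc test that just finished reduces to this RPC sequence against a target started with --wait-for-rpc (each rpc_cmd in the trace wraps scripts/rpc.py; the final grep is the test's assertion that opcode copy ended up on the software module):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m incorrect   # bogus module first
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software    # then reassign
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software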
00:07:34.336 [2024-07-15 02:09:33.688292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71214 ] 00:07:34.336 [2024-07-15 02:09:33.822725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.592 [2024-07-15 02:09:33.920290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:34.592 [2024-07-15 02:09:33.920545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.158 02:09:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:35.158 02:09:34 -- common/autotest_common.sh@852 -- # return 0 00:07:35.158 02:09:34 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:35.416 { 00:07:35.416 "fields": { 00:07:35.416 "commit": "4b94202c6", 00:07:35.416 "major": 24, 00:07:35.416 "minor": 1, 00:07:35.416 "patch": 1, 00:07:35.416 "suffix": "-pre" 00:07:35.416 }, 00:07:35.416 "version": "SPDK v24.01.1-pre git sha1 4b94202c6" 00:07:35.416 } 00:07:35.416 02:09:34 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:35.416 02:09:34 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:35.416 02:09:34 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:35.416 02:09:34 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:35.416 02:09:34 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:35.416 02:09:34 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:35.416 02:09:34 -- app/cmdline.sh@26 -- # sort 00:07:35.416 02:09:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.416 02:09:34 -- common/autotest_common.sh@10 -- # set +x 00:07:35.416 02:09:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.416 02:09:34 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:35.416 02:09:34 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:35.416 02:09:34 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.416 02:09:34 -- common/autotest_common.sh@640 -- # local es=0 00:07:35.416 02:09:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.416 02:09:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.416 02:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:35.416 02:09:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.416 02:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:35.416 02:09:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.416 02:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:35.416 02:09:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.416 02:09:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:35.416 02:09:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.674 2024/07/15 02:09:35 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:35.674 request: 00:07:35.674 { 00:07:35.674 "method": "env_dpdk_get_mem_stats", 00:07:35.674 "params": {} 00:07:35.674 } 00:07:35.674 Got JSON-RPC error response 00:07:35.674 GoRPCClient: error on JSON-RPC call 00:07:35.674 02:09:35 -- common/autotest_common.sh@643 -- # es=1 00:07:35.674 02:09:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:35.674 02:09:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:35.674 02:09:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:35.674 02:09:35 -- app/cmdline.sh@1 -- # killprocess 71214 00:07:35.674 02:09:35 -- common/autotest_common.sh@926 -- # '[' -z 71214 ']' 00:07:35.674 02:09:35 -- common/autotest_common.sh@930 -- # kill -0 71214 00:07:35.674 02:09:35 -- common/autotest_common.sh@931 -- # uname 00:07:35.674 02:09:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:35.674 02:09:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71214 00:07:35.674 02:09:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:35.674 02:09:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:35.674 killing process with pid 71214 00:07:35.674 02:09:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71214' 00:07:35.674 02:09:35 -- common/autotest_common.sh@945 -- # kill 71214 00:07:35.674 02:09:35 -- common/autotest_common.sh@950 -- # wait 71214 00:07:36.241 00:07:36.241 real 0m1.929s 00:07:36.241 user 0m2.321s 00:07:36.241 sys 0m0.476s 00:07:36.241 02:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.241 ************************************ 00:07:36.241 END TEST app_cmdline 00:07:36.241 ************************************ 00:07:36.241 02:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.241 02:09:35 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:36.241 02:09:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.241 02:09:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.241 02:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.241 ************************************ 00:07:36.241 START TEST version 00:07:36.241 ************************************ 00:07:36.241 02:09:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:36.241 * Looking for test storage... 
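[Note on the Code=-32601 error above: it is the expected outcome of the RPC allow-list this spdk_tgt was started with (--rpcs-allowed spdk_get_version,rpc_get_methods). A sketch of the behaviour, assuming the same binaries and default socket as in this run:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # ... wait for /var/tmp/spdk.sock to appear ...
    $rpc spdk_get_version          # on the allow list: returns the version object shown above
    $rpc rpc_get_methods           # on the allow list: lists exactly the two permitted methods
    $rpc env_dpdk_get_mem_stats    # not permitted: fails with Code=-32601 (Method not found)

The test's NOT helper treats that failure as a pass, which is why the run continues normally after the error.]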
00:07:36.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:36.241 02:09:35 -- app/version.sh@17 -- # get_header_version major 00:07:36.241 02:09:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:36.241 02:09:35 -- app/version.sh@14 -- # cut -f2 00:07:36.241 02:09:35 -- app/version.sh@14 -- # tr -d '"' 00:07:36.241 02:09:35 -- app/version.sh@17 -- # major=24 00:07:36.241 02:09:35 -- app/version.sh@18 -- # get_header_version minor 00:07:36.241 02:09:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:36.241 02:09:35 -- app/version.sh@14 -- # cut -f2 00:07:36.241 02:09:35 -- app/version.sh@14 -- # tr -d '"' 00:07:36.241 02:09:35 -- app/version.sh@18 -- # minor=1 00:07:36.241 02:09:35 -- app/version.sh@19 -- # get_header_version patch 00:07:36.241 02:09:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:36.241 02:09:35 -- app/version.sh@14 -- # cut -f2 00:07:36.241 02:09:35 -- app/version.sh@14 -- # tr -d '"' 00:07:36.241 02:09:35 -- app/version.sh@19 -- # patch=1 00:07:36.241 02:09:35 -- app/version.sh@20 -- # get_header_version suffix 00:07:36.241 02:09:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:36.241 02:09:35 -- app/version.sh@14 -- # cut -f2 00:07:36.241 02:09:35 -- app/version.sh@14 -- # tr -d '"' 00:07:36.241 02:09:35 -- app/version.sh@20 -- # suffix=-pre 00:07:36.241 02:09:35 -- app/version.sh@22 -- # version=24.1 00:07:36.241 02:09:35 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:36.241 02:09:35 -- app/version.sh@25 -- # version=24.1.1 00:07:36.241 02:09:35 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:36.241 02:09:35 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:36.241 02:09:35 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:36.241 02:09:35 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:36.241 02:09:35 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:36.241 00:07:36.241 real 0m0.140s 00:07:36.241 user 0m0.084s 00:07:36.241 sys 0m0.088s 00:07:36.241 02:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.241 02:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.241 ************************************ 00:07:36.241 END TEST version 00:07:36.241 ************************************ 00:07:36.241 02:09:35 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:36.241 02:09:35 -- spdk/autotest.sh@204 -- # uname -s 00:07:36.241 02:09:35 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:36.242 02:09:35 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:36.242 02:09:35 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:36.242 02:09:35 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:36.242 02:09:35 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:36.242 02:09:35 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:36.242 02:09:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:36.242 02:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.242 02:09:35 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:36.242 02:09:35 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:36.242 02:09:35 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:36.242 02:09:35 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:36.242 02:09:35 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:36.242 02:09:35 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:36.242 02:09:35 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:36.242 02:09:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:36.242 02:09:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.242 02:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.242 ************************************ 00:07:36.242 START TEST nvmf_tcp 00:07:36.242 ************************************ 00:07:36.242 02:09:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:36.501 * Looking for test storage... 00:07:36.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:36.501 02:09:35 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:36.501 02:09:35 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:36.501 02:09:35 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:36.501 02:09:35 -- nvmf/common.sh@7 -- # uname -s 00:07:36.501 02:09:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.501 02:09:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.501 02:09:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.501 02:09:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.501 02:09:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.501 02:09:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.501 02:09:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.501 02:09:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.501 02:09:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.501 02:09:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.501 02:09:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:07:36.501 02:09:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:07:36.501 02:09:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.501 02:09:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.501 02:09:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:36.501 02:09:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.501 02:09:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.501 02:09:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.501 02:09:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.501 02:09:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.501 02:09:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.501 02:09:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.501 02:09:35 -- paths/export.sh@5 -- # export PATH 00:07:36.501 02:09:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.501 02:09:35 -- nvmf/common.sh@46 -- # : 0 00:07:36.501 02:09:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:36.501 02:09:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:36.501 02:09:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:36.501 02:09:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.501 02:09:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.501 02:09:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:36.501 02:09:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:36.501 02:09:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:36.501 02:09:35 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:36.501 02:09:35 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:36.501 02:09:35 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:36.501 02:09:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:36.501 02:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.501 02:09:35 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:36.501 02:09:35 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:36.501 02:09:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:36.501 02:09:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.501 02:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.501 ************************************ 00:07:36.501 START TEST nvmf_example 00:07:36.501 ************************************ 00:07:36.501 02:09:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:36.501 * Looking for test storage... 
00:07:36.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:36.501 02:09:35 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:36.501 02:09:35 -- nvmf/common.sh@7 -- # uname -s 00:07:36.501 02:09:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.501 02:09:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.501 02:09:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.501 02:09:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.501 02:09:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.501 02:09:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.501 02:09:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.501 02:09:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.501 02:09:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.501 02:09:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.501 02:09:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:07:36.501 02:09:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:07:36.501 02:09:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.501 02:09:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.501 02:09:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:36.501 02:09:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.501 02:09:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.501 02:09:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.501 02:09:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.501 02:09:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.501 02:09:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.501 02:09:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.501 02:09:35 -- 
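[Note on the host identity set up in nvmf/common.sh, sourced at the top of nvmf_example.sh above: it is derived from nvme-cli rather than hard-coded. A sketch of that step; the HOSTID extraction shown here is an assumption inferred from the values in this log, where the host ID equals the UUID suffix of the generated NQN:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:97a9fd12-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: reuse the UUID portion as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

These two flags are what later nvme connect invocations in the suite pass to identify the initiator.]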
paths/export.sh@5 -- # export PATH 00:07:36.502 02:09:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.502 02:09:35 -- nvmf/common.sh@46 -- # : 0 00:07:36.502 02:09:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:36.502 02:09:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:36.502 02:09:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:36.502 02:09:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.502 02:09:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.502 02:09:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:36.502 02:09:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:36.502 02:09:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:36.502 02:09:35 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:36.502 02:09:35 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:36.502 02:09:35 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:36.502 02:09:35 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:36.502 02:09:35 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:36.502 02:09:35 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:36.502 02:09:35 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:36.502 02:09:35 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:36.502 02:09:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:36.502 02:09:35 -- common/autotest_common.sh@10 -- # set +x 00:07:36.502 02:09:35 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:36.502 02:09:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:36.502 02:09:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.502 02:09:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:36.502 02:09:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:36.502 02:09:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:36.502 02:09:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.502 02:09:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.502 02:09:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.502 02:09:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:36.502 02:09:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:36.502 02:09:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:36.502 02:09:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:36.502 02:09:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:36.502 02:09:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:36.502 02:09:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.502 02:09:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.502 02:09:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:36.502 02:09:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:36.502 02:09:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:36.502 02:09:35 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:36.502 02:09:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:36.502 02:09:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.502 02:09:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:36.502 02:09:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:36.502 02:09:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:36.502 02:09:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:36.502 02:09:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:36.502 Cannot find device "nvmf_init_br" 00:07:36.502 02:09:35 -- nvmf/common.sh@153 -- # true 00:07:36.502 02:09:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:36.502 Cannot find device "nvmf_tgt_br" 00:07:36.502 02:09:36 -- nvmf/common.sh@154 -- # true 00:07:36.502 02:09:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:36.502 Cannot find device "nvmf_tgt_br2" 00:07:36.502 02:09:36 -- nvmf/common.sh@155 -- # true 00:07:36.502 02:09:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:36.502 Cannot find device "nvmf_init_br" 00:07:36.502 02:09:36 -- nvmf/common.sh@156 -- # true 00:07:36.502 02:09:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:36.502 Cannot find device "nvmf_tgt_br" 00:07:36.502 02:09:36 -- nvmf/common.sh@157 -- # true 00:07:36.502 02:09:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:36.502 Cannot find device "nvmf_tgt_br2" 00:07:36.502 02:09:36 -- nvmf/common.sh@158 -- # true 00:07:36.502 02:09:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:36.761 Cannot find device "nvmf_br" 00:07:36.761 02:09:36 -- nvmf/common.sh@159 -- # true 00:07:36.761 02:09:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:36.761 Cannot find device "nvmf_init_if" 00:07:36.761 02:09:36 -- nvmf/common.sh@160 -- # true 00:07:36.761 02:09:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:36.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:36.761 02:09:36 -- nvmf/common.sh@161 -- # true 00:07:36.761 02:09:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:36.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:36.761 02:09:36 -- nvmf/common.sh@162 -- # true 00:07:36.761 02:09:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:36.761 02:09:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:36.761 02:09:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:36.761 02:09:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:36.761 02:09:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:36.761 02:09:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:36.761 02:09:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:36.761 02:09:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:36.761 02:09:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:36.761 02:09:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:36.761 
02:09:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:36.761 02:09:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:36.761 02:09:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:36.761 02:09:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:36.761 02:09:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:36.761 02:09:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:36.761 02:09:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:36.761 02:09:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:36.761 02:09:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:36.761 02:09:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:36.761 02:09:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:36.761 02:09:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:36.761 02:09:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:36.761 02:09:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:36.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:07:36.761 00:07:36.761 --- 10.0.0.2 ping statistics --- 00:07:36.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.761 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:36.761 02:09:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:36.761 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:36.761 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:07:36.761 00:07:36.761 --- 10.0.0.3 ping statistics --- 00:07:36.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.761 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:07:36.761 02:09:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:37.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:37.020 00:07:37.020 --- 10.0.0.1 ping statistics --- 00:07:37.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.020 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:37.020 02:09:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.020 02:09:36 -- nvmf/common.sh@421 -- # return 0 00:07:37.020 02:09:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:37.020 02:09:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.020 02:09:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:37.020 02:09:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:37.020 02:09:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.020 02:09:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:37.020 02:09:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:37.020 02:09:36 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:37.020 02:09:36 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:37.020 02:09:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:37.020 02:09:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.020 02:09:36 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:37.020 02:09:36 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:37.020 02:09:36 -- target/nvmf_example.sh@34 -- # nvmfpid=71557 00:07:37.020 02:09:36 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:37.020 02:09:36 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.020 02:09:36 -- target/nvmf_example.sh@36 -- # waitforlisten 71557 00:07:37.020 02:09:36 -- common/autotest_common.sh@819 -- # '[' -z 71557 ']' 00:07:37.020 02:09:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.020 02:09:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:37.020 02:09:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
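[Note on the nvmf_veth_init trace above: it builds the test topology in software only, with a namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.2 and 10.0.0.3), an initiator veth on the host (10.0.0.1), all joined by the bridge nvmf_br, with an iptables rule opening the NVMe/TCP port, then verifies reachability with the three pings. A condensed sketch, showing one of the two target-side interfaces; run as root:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host side -> target namespace, matching the first ping above

The "Cannot find device" messages earlier in the trace are just the teardown of leftovers from a previous run and are expected to fail on a clean host.]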
00:07:37.020 02:09:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:37.020 02:09:36 -- common/autotest_common.sh@10 -- # set +x 00:07:37.956 02:09:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:37.956 02:09:37 -- common/autotest_common.sh@852 -- # return 0 00:07:37.956 02:09:37 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:37.956 02:09:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:37.956 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:07:37.956 02:09:37 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.956 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.956 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:07:37.956 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.956 02:09:37 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:37.956 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.956 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:07:37.956 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.956 02:09:37 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:37.956 02:09:37 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.956 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.956 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:07:37.956 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.956 02:09:37 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:37.956 02:09:37 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:37.956 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.956 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:07:37.956 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.956 02:09:37 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.956 02:09:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.956 02:09:37 -- common/autotest_common.sh@10 -- # set +x 00:07:37.956 02:09:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.956 02:09:37 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:37.956 02:09:37 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:50.157 Initializing NVMe Controllers 00:07:50.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:50.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:50.157 Initialization complete. Launching workers. 
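[Note on the sequence above and the results below: the test provisions the target over RPC, then drives it from the host side with spdk_nvme_perf for 10 seconds, which accounts for the timestamp gap before the latency table. A sketch of the sequence; rpc.py is assumed here to be pointed at the socket of the example nvmf app started inside the namespace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, flags exactly as traced above
    $rpc bdev_malloc_create 64 512                    # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 4 KiB random mixed I/O at queue depth 64 for 10 s (-M sets the read percentage of the mix):
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The table that follows reports per-namespace IOPS, throughput, and average/min/max latency in microseconds for this run.]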
00:07:50.157 ======================================================== 00:07:50.157 Latency(us) 00:07:50.157 Device Information : IOPS MiB/s Average min max 00:07:50.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15676.60 61.24 4082.15 761.92 23074.32 00:07:50.157 ======================================================== 00:07:50.157 Total : 15676.60 61.24 4082.15 761.92 23074.32 00:07:50.157 00:07:50.157 02:09:47 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:50.157 02:09:47 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:50.157 02:09:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:50.157 02:09:47 -- nvmf/common.sh@116 -- # sync 00:07:50.157 02:09:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:50.157 02:09:47 -- nvmf/common.sh@119 -- # set +e 00:07:50.157 02:09:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:50.157 02:09:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:50.157 rmmod nvme_tcp 00:07:50.157 rmmod nvme_fabrics 00:07:50.157 rmmod nvme_keyring 00:07:50.157 02:09:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:50.157 02:09:47 -- nvmf/common.sh@123 -- # set -e 00:07:50.157 02:09:47 -- nvmf/common.sh@124 -- # return 0 00:07:50.157 02:09:47 -- nvmf/common.sh@477 -- # '[' -n 71557 ']' 00:07:50.157 02:09:47 -- nvmf/common.sh@478 -- # killprocess 71557 00:07:50.157 02:09:47 -- common/autotest_common.sh@926 -- # '[' -z 71557 ']' 00:07:50.157 02:09:47 -- common/autotest_common.sh@930 -- # kill -0 71557 00:07:50.157 02:09:47 -- common/autotest_common.sh@931 -- # uname 00:07:50.157 02:09:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:50.157 02:09:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71557 00:07:50.157 02:09:47 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:50.157 02:09:47 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:50.157 killing process with pid 71557 00:07:50.157 02:09:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71557' 00:07:50.157 02:09:47 -- common/autotest_common.sh@945 -- # kill 71557 00:07:50.157 02:09:47 -- common/autotest_common.sh@950 -- # wait 71557 00:07:50.157 nvmf threads initialize successfully 00:07:50.157 bdev subsystem init successfully 00:07:50.157 created a nvmf target service 00:07:50.157 create targets's poll groups done 00:07:50.157 all subsystems of target started 00:07:50.157 nvmf target is running 00:07:50.157 all subsystems of target stopped 00:07:50.157 destroy targets's poll groups done 00:07:50.157 destroyed the nvmf target service 00:07:50.157 bdev subsystem finish successfully 00:07:50.157 nvmf threads destroy successfully 00:07:50.157 02:09:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:50.157 02:09:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:50.157 02:09:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:50.157 02:09:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.157 02:09:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:50.157 02:09:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.157 02:09:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.157 02:09:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.157 02:09:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:50.157 02:09:48 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:50.157 02:09:48 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:07:50.157 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:07:50.157 00:07:50.157 real 0m12.244s 00:07:50.157 user 0m44.263s 00:07:50.157 sys 0m2.012s 00:07:50.157 02:09:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.157 ************************************ 00:07:50.157 END TEST nvmf_example 00:07:50.157 ************************************ 00:07:50.157 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:07:50.157 02:09:48 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:50.157 02:09:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:50.157 02:09:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.157 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:07:50.157 ************************************ 00:07:50.157 START TEST nvmf_filesystem 00:07:50.157 ************************************ 00:07:50.157 02:09:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:50.157 * Looking for test storage... 00:07:50.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.157 02:09:48 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:50.157 02:09:48 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:50.157 02:09:48 -- common/autotest_common.sh@34 -- # set -e 00:07:50.157 02:09:48 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:50.157 02:09:48 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:50.157 02:09:48 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:50.157 02:09:48 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:50.157 02:09:48 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:50.157 02:09:48 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:50.157 02:09:48 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:50.157 02:09:48 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:50.157 02:09:48 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:50.157 02:09:48 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:50.157 02:09:48 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:50.157 02:09:48 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:50.157 02:09:48 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:50.157 02:09:48 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:50.157 02:09:48 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:50.157 02:09:48 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:50.157 02:09:48 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:50.157 02:09:48 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:50.157 02:09:48 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:50.157 02:09:48 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:50.157 02:09:48 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:50.157 02:09:48 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:50.157 02:09:48 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:50.157 02:09:48 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:50.157 02:09:48 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:50.157 02:09:48 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:07:50.157 02:09:48 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:50.157 02:09:48 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:50.157 02:09:48 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:50.157 02:09:48 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:50.157 02:09:48 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:50.157 02:09:48 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:50.157 02:09:48 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:50.157 02:09:48 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:50.157 02:09:48 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:50.157 02:09:48 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:50.157 02:09:48 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:50.157 02:09:48 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:50.157 02:09:48 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:50.157 02:09:48 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:50.158 02:09:48 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:50.158 02:09:48 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:50.158 02:09:48 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:50.158 02:09:48 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:50.158 02:09:48 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:50.158 02:09:48 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:50.158 02:09:48 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:50.158 02:09:48 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:50.158 02:09:48 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:50.158 02:09:48 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:50.158 02:09:48 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:50.158 02:09:48 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:50.158 02:09:48 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:50.158 02:09:48 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:50.158 02:09:48 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:50.158 02:09:48 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:50.158 02:09:48 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:50.158 02:09:48 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:50.158 02:09:48 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:50.158 02:09:48 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:50.158 02:09:48 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:50.158 02:09:48 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:50.158 02:09:48 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:50.158 02:09:48 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:50.158 02:09:48 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:50.158 02:09:48 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:50.158 02:09:48 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:50.158 02:09:48 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:50.158 02:09:48 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:50.158 02:09:48 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:50.158 02:09:48 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:50.158 02:09:48 -- common/build_config.sh@68 -- # 
CONFIG_AVAHI=y 00:07:50.158 02:09:48 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:50.158 02:09:48 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:50.158 02:09:48 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:50.158 02:09:48 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:50.158 02:09:48 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:50.158 02:09:48 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:50.158 02:09:48 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:50.158 02:09:48 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:50.158 02:09:48 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:50.158 02:09:48 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:50.158 02:09:48 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:50.158 02:09:48 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:50.158 02:09:48 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:50.158 02:09:48 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:50.158 02:09:48 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:50.158 02:09:48 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:50.158 02:09:48 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:50.158 02:09:48 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:50.158 02:09:48 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:50.158 02:09:48 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:50.158 02:09:48 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:50.158 02:09:48 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:50.158 02:09:48 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:50.158 02:09:48 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:50.158 02:09:48 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:50.158 02:09:48 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:50.158 02:09:48 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:50.158 #define SPDK_CONFIG_H 00:07:50.158 #define SPDK_CONFIG_APPS 1 00:07:50.158 #define SPDK_CONFIG_ARCH native 00:07:50.158 #undef SPDK_CONFIG_ASAN 00:07:50.158 #define SPDK_CONFIG_AVAHI 1 00:07:50.158 #undef SPDK_CONFIG_CET 00:07:50.158 #define SPDK_CONFIG_COVERAGE 1 00:07:50.158 #define SPDK_CONFIG_CROSS_PREFIX 00:07:50.158 #undef SPDK_CONFIG_CRYPTO 00:07:50.158 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:50.158 #undef SPDK_CONFIG_CUSTOMOCF 00:07:50.158 #undef SPDK_CONFIG_DAOS 00:07:50.158 #define SPDK_CONFIG_DAOS_DIR 00:07:50.158 #define SPDK_CONFIG_DEBUG 1 00:07:50.158 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:50.158 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:07:50.158 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:07:50.158 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:07:50.158 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:50.158 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:50.158 #define SPDK_CONFIG_EXAMPLES 1 00:07:50.158 #undef SPDK_CONFIG_FC 00:07:50.158 #define 
SPDK_CONFIG_FC_PATH 00:07:50.158 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:50.158 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:50.158 #undef SPDK_CONFIG_FUSE 00:07:50.158 #undef SPDK_CONFIG_FUZZER 00:07:50.158 #define SPDK_CONFIG_FUZZER_LIB 00:07:50.158 #define SPDK_CONFIG_GOLANG 1 00:07:50.158 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:50.158 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:50.158 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:50.158 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:50.158 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:50.158 #define SPDK_CONFIG_IDXD 1 00:07:50.158 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:50.158 #undef SPDK_CONFIG_IPSEC_MB 00:07:50.158 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:50.158 #define SPDK_CONFIG_ISAL 1 00:07:50.158 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:50.158 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:50.158 #define SPDK_CONFIG_LIBDIR 00:07:50.158 #undef SPDK_CONFIG_LTO 00:07:50.158 #define SPDK_CONFIG_MAX_LCORES 00:07:50.158 #define SPDK_CONFIG_NVME_CUSE 1 00:07:50.158 #undef SPDK_CONFIG_OCF 00:07:50.158 #define SPDK_CONFIG_OCF_PATH 00:07:50.158 #define SPDK_CONFIG_OPENSSL_PATH 00:07:50.158 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:50.158 #undef SPDK_CONFIG_PGO_USE 00:07:50.158 #define SPDK_CONFIG_PREFIX /usr/local 00:07:50.158 #undef SPDK_CONFIG_RAID5F 00:07:50.158 #undef SPDK_CONFIG_RBD 00:07:50.158 #define SPDK_CONFIG_RDMA 1 00:07:50.158 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:50.158 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:50.158 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:50.158 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:50.158 #define SPDK_CONFIG_SHARED 1 00:07:50.158 #undef SPDK_CONFIG_SMA 00:07:50.158 #define SPDK_CONFIG_TESTS 1 00:07:50.158 #undef SPDK_CONFIG_TSAN 00:07:50.158 #define SPDK_CONFIG_UBLK 1 00:07:50.158 #define SPDK_CONFIG_UBSAN 1 00:07:50.158 #undef SPDK_CONFIG_UNIT_TESTS 00:07:50.158 #undef SPDK_CONFIG_URING 00:07:50.158 #define SPDK_CONFIG_URING_PATH 00:07:50.158 #undef SPDK_CONFIG_URING_ZNS 00:07:50.158 #define SPDK_CONFIG_USDT 1 00:07:50.158 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:50.158 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:50.158 #undef SPDK_CONFIG_VFIO_USER 00:07:50.158 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:50.158 #define SPDK_CONFIG_VHOST 1 00:07:50.158 #define SPDK_CONFIG_VIRTIO 1 00:07:50.158 #undef SPDK_CONFIG_VTUNE 00:07:50.158 #define SPDK_CONFIG_VTUNE_DIR 00:07:50.158 #define SPDK_CONFIG_WERROR 1 00:07:50.158 #define SPDK_CONFIG_WPDK_DIR 00:07:50.158 #undef SPDK_CONFIG_XNVME 00:07:50.158 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:50.158 02:09:48 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:50.158 02:09:48 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.158 02:09:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.158 02:09:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.158 02:09:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.158 02:09:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.158 02:09:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.158 02:09:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.158 02:09:48 -- paths/export.sh@5 -- # export PATH 00:07:50.158 02:09:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.158 02:09:48 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:50.158 02:09:48 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:50.158 02:09:48 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:50.158 02:09:48 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:50.158 02:09:48 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:50.158 02:09:48 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:50.158 02:09:48 -- pm/common@16 -- # TEST_TAG=N/A 00:07:50.158 02:09:48 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:50.158 02:09:48 -- common/autotest_common.sh@52 -- # : 1 00:07:50.158 02:09:48 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:50.158 02:09:48 -- common/autotest_common.sh@56 -- # : 0 00:07:50.158 02:09:48 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:50.159 02:09:48 -- common/autotest_common.sh@58 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:50.159 02:09:48 -- 
common/autotest_common.sh@60 -- # : 1 00:07:50.159 02:09:48 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:50.159 02:09:48 -- common/autotest_common.sh@62 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:50.159 02:09:48 -- common/autotest_common.sh@64 -- # : 00:07:50.159 02:09:48 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:50.159 02:09:48 -- common/autotest_common.sh@66 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:50.159 02:09:48 -- common/autotest_common.sh@68 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:50.159 02:09:48 -- common/autotest_common.sh@70 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:50.159 02:09:48 -- common/autotest_common.sh@72 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:50.159 02:09:48 -- common/autotest_common.sh@74 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:50.159 02:09:48 -- common/autotest_common.sh@76 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:50.159 02:09:48 -- common/autotest_common.sh@78 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:50.159 02:09:48 -- common/autotest_common.sh@80 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:50.159 02:09:48 -- common/autotest_common.sh@82 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:50.159 02:09:48 -- common/autotest_common.sh@84 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:50.159 02:09:48 -- common/autotest_common.sh@86 -- # : 1 00:07:50.159 02:09:48 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:50.159 02:09:48 -- common/autotest_common.sh@88 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:50.159 02:09:48 -- common/autotest_common.sh@90 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:50.159 02:09:48 -- common/autotest_common.sh@92 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:50.159 02:09:48 -- common/autotest_common.sh@94 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:50.159 02:09:48 -- common/autotest_common.sh@96 -- # : tcp 00:07:50.159 02:09:48 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:50.159 02:09:48 -- common/autotest_common.sh@98 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:50.159 02:09:48 -- common/autotest_common.sh@100 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:50.159 02:09:48 -- common/autotest_common.sh@102 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:50.159 02:09:48 -- common/autotest_common.sh@104 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:50.159 02:09:48 -- common/autotest_common.sh@106 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:50.159 
02:09:48 -- common/autotest_common.sh@108 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:50.159 02:09:48 -- common/autotest_common.sh@110 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:50.159 02:09:48 -- common/autotest_common.sh@112 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:50.159 02:09:48 -- common/autotest_common.sh@114 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:50.159 02:09:48 -- common/autotest_common.sh@116 -- # : 1 00:07:50.159 02:09:48 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:50.159 02:09:48 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:07:50.159 02:09:48 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:50.159 02:09:48 -- common/autotest_common.sh@120 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:50.159 02:09:48 -- common/autotest_common.sh@122 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:50.159 02:09:48 -- common/autotest_common.sh@124 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:50.159 02:09:48 -- common/autotest_common.sh@126 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:50.159 02:09:48 -- common/autotest_common.sh@128 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:50.159 02:09:48 -- common/autotest_common.sh@130 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:50.159 02:09:48 -- common/autotest_common.sh@132 -- # : v22.11.4 00:07:50.159 02:09:48 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:50.159 02:09:48 -- common/autotest_common.sh@134 -- # : true 00:07:50.159 02:09:48 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:50.159 02:09:48 -- common/autotest_common.sh@136 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:50.159 02:09:48 -- common/autotest_common.sh@138 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:50.159 02:09:48 -- common/autotest_common.sh@140 -- # : 1 00:07:50.159 02:09:48 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:50.159 02:09:48 -- common/autotest_common.sh@142 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:50.159 02:09:48 -- common/autotest_common.sh@144 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:50.159 02:09:48 -- common/autotest_common.sh@146 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:50.159 02:09:48 -- common/autotest_common.sh@148 -- # : 00:07:50.159 02:09:48 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:50.159 02:09:48 -- common/autotest_common.sh@150 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:50.159 02:09:48 -- common/autotest_common.sh@152 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:50.159 02:09:48 -- common/autotest_common.sh@154 -- # : 0 00:07:50.159 02:09:48 -- 
common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:50.159 02:09:48 -- common/autotest_common.sh@156 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:50.159 02:09:48 -- common/autotest_common.sh@158 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:50.159 02:09:48 -- common/autotest_common.sh@160 -- # : 0 00:07:50.159 02:09:48 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:50.159 02:09:48 -- common/autotest_common.sh@163 -- # : 00:07:50.159 02:09:48 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:50.159 02:09:48 -- common/autotest_common.sh@165 -- # : 1 00:07:50.159 02:09:48 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:50.159 02:09:48 -- common/autotest_common.sh@167 -- # : 1 00:07:50.159 02:09:48 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:50.159 02:09:48 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:50.159 02:09:48 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:50.159 02:09:48 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:50.159 02:09:48 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:50.159 02:09:48 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:50.159 02:09:48 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:50.159 02:09:48 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:50.159 02:09:48 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:50.159 02:09:48 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.159 02:09:48 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.159 02:09:48 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:50.159 02:09:48 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:50.159 02:09:48 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:50.159 02:09:48 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:50.159 02:09:48 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.159 02:09:48 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.159 02:09:48 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.159 02:09:48 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.159 02:09:48 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:50.159 02:09:48 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:50.159 02:09:48 -- common/autotest_common.sh@196 -- # cat 00:07:50.159 02:09:48 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:50.159 02:09:48 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.159 02:09:48 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.159 02:09:48 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.159 02:09:48 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.159 02:09:48 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:50.160 02:09:48 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:50.160 02:09:48 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:50.160 02:09:48 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:50.160 02:09:48 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:50.160 02:09:48 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:50.160 02:09:48 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.160 02:09:48 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.160 02:09:48 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.160 02:09:48 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.160 02:09:48 -- common/autotest_common.sh@242 -- # export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:50.160 02:09:48 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:50.160 02:09:48 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.160 02:09:48 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.160 02:09:48 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:50.160 02:09:48 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:50.160 02:09:48 -- common/autotest_common.sh@249 -- # valgrind= 00:07:50.160 02:09:48 -- common/autotest_common.sh@255 -- # uname -s 00:07:50.160 02:09:48 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:50.160 02:09:48 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:50.160 02:09:48 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:50.160 02:09:48 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:50.160 02:09:48 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:50.160 02:09:48 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:07:50.160 02:09:48 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:50.160 02:09:48 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:50.160 02:09:48 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:50.160 02:09:48 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:50.160 02:09:48 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:50.160 02:09:48 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:50.160 02:09:48 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:50.160 02:09:48 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:50.160 02:09:48 -- common/autotest_common.sh@309 -- # [[ -z 71802 ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@309 -- # kill -0 71802 00:07:50.160 02:09:48 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:50.160 02:09:48 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:50.160 02:09:48 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:50.160 02:09:48 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:50.160 02:09:48 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:50.160 02:09:48 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:50.160 02:09:48 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:50.160 02:09:48 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.Yt2xKI 00:07:50.160 02:09:48 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:50.160 02:09:48 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.Yt2xKI/tests/target /tmp/spdk.Yt2xKI 00:07:50.160 02:09:48 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- 
common/autotest_common.sh@318 -- # df -T 00:07:50.160 02:09:48 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266634240 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=2494353408 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2507157504 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=12804096 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=13206777856 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=5837754368 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=13206777856 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=5837754368 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267756544 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267895808 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=139264 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=843546624 00:07:50.160 02:09:48 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=100016128 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=92499968 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=12107776 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253572608 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253576704 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:07:50.160 02:09:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=94401658880 00:07:50.160 02:09:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:07:50.160 02:09:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=5301121024 00:07:50.160 02:09:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:50.160 02:09:48 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:50.160 * Looking for test storage... 
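The set_test_storage probe that starts here walks df -T, records each mount's filesystem type, total size, and available space into the fss/sizes/avails arrays, then (in the selection that follows) takes the first candidate directory with enough room for the requested 2 GiB, here /home (btrfs on /dev/vda5) with target_space=13206777856 bytes free. A simplified sketch of that probe; the 1K-block-to-byte scaling is an assumption consistent with the array values in the trace:

    requested_size=2147483648                  # 2 GiB, as requested above
    declare -A fss sizes avails
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))       # df -T reports 1K blocks
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    for target_dir in "${storage_candidates[@]}"; do   # testdir, mktemp fallback (set up earlier)
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( ${avails["$mount"]:-0} >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done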
00:07:50.160 02:09:48 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:50.160 02:09:48 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:50.160 02:09:48 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.160 02:09:48 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:50.160 02:09:48 -- common/autotest_common.sh@363 -- # mount=/home 00:07:50.160 02:09:48 -- common/autotest_common.sh@365 -- # target_space=13206777856 00:07:50.160 02:09:48 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:50.160 02:09:48 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:50.160 02:09:48 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.160 02:09:48 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.160 02:09:48 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.160 02:09:48 -- common/autotest_common.sh@380 -- # return 0 00:07:50.160 02:09:48 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:50.160 02:09:48 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:50.160 02:09:48 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:50.160 02:09:48 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:50.160 02:09:48 -- common/autotest_common.sh@1672 -- # true 00:07:50.160 02:09:48 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:50.160 02:09:48 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:50.160 02:09:48 -- common/autotest_common.sh@27 -- # exec 00:07:50.160 02:09:48 -- common/autotest_common.sh@29 -- # exec 00:07:50.161 02:09:48 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:50.161 02:09:48 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:50.161 02:09:48 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:50.161 02:09:48 -- common/autotest_common.sh@18 -- # set -x 00:07:50.161 02:09:48 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:50.161 02:09:48 -- nvmf/common.sh@7 -- # uname -s 00:07:50.161 02:09:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.161 02:09:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.161 02:09:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.161 02:09:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.161 02:09:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.161 02:09:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.161 02:09:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.161 02:09:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.161 02:09:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.161 02:09:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.161 02:09:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:07:50.161 02:09:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:07:50.161 02:09:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.161 02:09:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.161 02:09:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:50.161 02:09:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.161 02:09:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.161 02:09:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.161 02:09:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.161 02:09:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.161 02:09:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.161 02:09:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.161 02:09:48 -- paths/export.sh@5 -- # export PATH 00:07:50.161 02:09:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.161 02:09:48 -- nvmf/common.sh@46 -- # : 0 00:07:50.161 02:09:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:50.161 02:09:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:50.161 02:09:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:50.161 02:09:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.161 02:09:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.161 02:09:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:50.161 02:09:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:50.161 02:09:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:50.161 02:09:48 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:50.161 02:09:48 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:50.161 02:09:48 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:50.161 02:09:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:50.161 02:09:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.161 02:09:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:50.161 02:09:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:50.161 02:09:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:50.161 02:09:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.161 02:09:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.161 02:09:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.161 02:09:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:50.161 02:09:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:50.161 02:09:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:50.161 02:09:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:50.161 02:09:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:50.161 02:09:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:50.161 02:09:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.161 02:09:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.161 02:09:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:50.161 02:09:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:50.161 02:09:48 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:50.161 02:09:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:50.161 02:09:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:50.161 02:09:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.161 02:09:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:50.161 02:09:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:50.161 02:09:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:50.161 02:09:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:50.161 02:09:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:50.161 02:09:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:50.161 Cannot find device "nvmf_tgt_br" 00:07:50.161 02:09:48 -- nvmf/common.sh@154 -- # true 00:07:50.161 02:09:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:50.161 Cannot find device "nvmf_tgt_br2" 00:07:50.161 02:09:48 -- nvmf/common.sh@155 -- # true 00:07:50.161 02:09:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:50.161 02:09:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:50.161 Cannot find device "nvmf_tgt_br" 00:07:50.161 02:09:48 -- nvmf/common.sh@157 -- # true 00:07:50.161 02:09:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:50.161 Cannot find device "nvmf_tgt_br2" 00:07:50.161 02:09:48 -- nvmf/common.sh@158 -- # true 00:07:50.161 02:09:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:50.161 02:09:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:50.161 02:09:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:50.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.161 02:09:48 -- nvmf/common.sh@161 -- # true 00:07:50.161 02:09:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:50.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.161 02:09:48 -- nvmf/common.sh@162 -- # true 00:07:50.161 02:09:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:50.161 02:09:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:50.161 02:09:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:50.161 02:09:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:50.161 02:09:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:50.161 02:09:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:50.161 02:09:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:50.161 02:09:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:50.161 02:09:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:50.162 02:09:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:50.162 02:09:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:50.162 02:09:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:50.162 02:09:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:50.162 02:09:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:50.162 02:09:48 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:50.162 02:09:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:50.162 02:09:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:50.162 02:09:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:50.162 02:09:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:50.162 02:09:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:50.162 02:09:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:50.162 02:09:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:50.162 02:09:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:50.162 02:09:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:50.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:07:50.162 00:07:50.162 --- 10.0.0.2 ping statistics --- 00:07:50.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.162 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:50.162 02:09:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:50.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:50.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:07:50.162 00:07:50.162 --- 10.0.0.3 ping statistics --- 00:07:50.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.162 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:50.162 02:09:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:50.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:07:50.162 00:07:50.162 --- 10.0.0.1 ping statistics --- 00:07:50.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.162 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:07:50.162 02:09:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.162 02:09:48 -- nvmf/common.sh@421 -- # return 0 00:07:50.162 02:09:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:50.162 02:09:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.162 02:09:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:50.162 02:09:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:50.162 02:09:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.162 02:09:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:50.162 02:09:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:50.162 02:09:48 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:50.162 02:09:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:50.162 02:09:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.162 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:07:50.162 ************************************ 00:07:50.162 START TEST nvmf_filesystem_no_in_capsule 00:07:50.162 ************************************ 00:07:50.162 02:09:48 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:50.162 02:09:48 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:50.162 02:09:48 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:50.162 02:09:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:50.162 02:09:48 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:07:50.162 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:07:50.162 02:09:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.162 02:09:48 -- nvmf/common.sh@469 -- # nvmfpid=71966 00:07:50.162 02:09:48 -- nvmf/common.sh@470 -- # waitforlisten 71966 00:07:50.162 02:09:48 -- common/autotest_common.sh@819 -- # '[' -z 71966 ']' 00:07:50.162 02:09:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.162 02:09:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:50.162 02:09:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.162 02:09:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:50.162 02:09:48 -- common/autotest_common.sh@10 -- # set +x 00:07:50.162 [2024-07-15 02:09:48.823674] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:50.162 [2024-07-15 02:09:48.823778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.162 [2024-07-15 02:09:48.962102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.162 [2024-07-15 02:09:49.056197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.162 [2024-07-15 02:09:49.056349] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.162 [2024-07-15 02:09:49.056363] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.162 [2024-07-15 02:09:49.056372] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
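For reference, the nvmf_veth_init plumbing traced a few records back builds the whole test network from scratch: one namespace for the target, three veth pairs, and a bridge joining the host-side peers. Condensed into a standalone sketch (the best-effort teardown of any previous run, visible above as the failing "Cannot find device" / "Cannot open network namespace" calls, is omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # Move the target-side ends into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bring every link up, inside and outside the namespace
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # Admit NVMe/TCP traffic and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # sanity check before starting the target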
00:07:50.162 [2024-07-15 02:09:49.056548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.162 [2024-07-15 02:09:49.056638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.162 [2024-07-15 02:09:49.057248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.162 [2024-07-15 02:09:49.057297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.420 02:09:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:50.420 02:09:49 -- common/autotest_common.sh@852 -- # return 0 00:07:50.420 02:09:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:50.420 02:09:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:50.420 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:07:50.420 02:09:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.420 02:09:49 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:50.420 02:09:49 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:50.420 02:09:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.420 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:07:50.420 [2024-07-15 02:09:49.844921] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.420 02:09:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.420 02:09:49 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:50.420 02:09:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.420 02:09:49 -- common/autotest_common.sh@10 -- # set +x 00:07:50.678 Malloc1 00:07:50.678 02:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.678 02:09:50 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:50.678 02:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.678 02:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:50.678 02:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.678 02:09:50 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:50.678 02:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.678 02:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:50.678 02:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.678 02:09:50 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.678 02:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.678 02:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:50.678 [2024-07-15 02:09:50.034300] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.678 02:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.678 02:09:50 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:50.678 02:09:50 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:50.678 02:09:50 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:50.678 02:09:50 -- common/autotest_common.sh@1359 -- # local bs 00:07:50.678 02:09:50 -- common/autotest_common.sh@1360 -- # local nb 00:07:50.678 02:09:50 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:50.678 02:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.678 02:09:50 -- common/autotest_common.sh@10 -- # set +x 00:07:50.678 
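The rpc_cmd wrappers above talk to the freshly started target over the default RPC socket (/var/tmp/spdk.sock, the DEFAULT_RPC_ADDR exported earlier) and map one-to-one onto plain scripts/rpc.py invocations. The zero-in-capsule variant of the test configures the target like so:

    # TCP transport with an 8192-byte I/O unit and in-capsule data size 0 for this variant
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # 512 MiB malloc ramdisk with 512-byte blocks (hence num_blocks 1048576 in the dump below)
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # Subsystem advertising the serial number the initiator will poll for
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420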
02:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.678 02:09:50 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:50.678 { 00:07:50.678 "aliases": [ 00:07:50.679 "09ad6ade-3138-4fa4-b77b-b3bced625890" 00:07:50.679 ], 00:07:50.679 "assigned_rate_limits": { 00:07:50.679 "r_mbytes_per_sec": 0, 00:07:50.679 "rw_ios_per_sec": 0, 00:07:50.679 "rw_mbytes_per_sec": 0, 00:07:50.679 "w_mbytes_per_sec": 0 00:07:50.679 }, 00:07:50.679 "block_size": 512, 00:07:50.679 "claim_type": "exclusive_write", 00:07:50.679 "claimed": true, 00:07:50.679 "driver_specific": {}, 00:07:50.679 "memory_domains": [ 00:07:50.679 { 00:07:50.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.679 "dma_device_type": 2 00:07:50.679 } 00:07:50.679 ], 00:07:50.679 "name": "Malloc1", 00:07:50.679 "num_blocks": 1048576, 00:07:50.679 "product_name": "Malloc disk", 00:07:50.679 "supported_io_types": { 00:07:50.679 "abort": true, 00:07:50.679 "compare": false, 00:07:50.679 "compare_and_write": false, 00:07:50.679 "flush": true, 00:07:50.679 "nvme_admin": false, 00:07:50.679 "nvme_io": false, 00:07:50.679 "read": true, 00:07:50.679 "reset": true, 00:07:50.679 "unmap": true, 00:07:50.679 "write": true, 00:07:50.679 "write_zeroes": true 00:07:50.679 }, 00:07:50.679 "uuid": "09ad6ade-3138-4fa4-b77b-b3bced625890", 00:07:50.679 "zoned": false 00:07:50.679 } 00:07:50.679 ]' 00:07:50.679 02:09:50 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:50.679 02:09:50 -- common/autotest_common.sh@1362 -- # bs=512 00:07:50.679 02:09:50 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:50.679 02:09:50 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:50.679 02:09:50 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:50.679 02:09:50 -- common/autotest_common.sh@1367 -- # echo 512 00:07:50.679 02:09:50 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:50.679 02:09:50 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.960 02:09:50 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.960 02:09:50 -- common/autotest_common.sh@1177 -- # local i=0 00:07:50.960 02:09:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.960 02:09:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:50.960 02:09:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:52.859 02:09:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:52.859 02:09:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:52.859 02:09:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.859 02:09:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:52.859 02:09:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.859 02:09:52 -- common/autotest_common.sh@1187 -- # return 0 00:07:52.859 02:09:52 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:52.859 02:09:52 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:52.859 02:09:52 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:52.859 02:09:52 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:52.859 02:09:52 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:52.859 02:09:52 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:52.859 02:09:52 -- 
setup/common.sh@80 -- # echo 536870912 00:07:52.859 02:09:52 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:52.859 02:09:52 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:52.859 02:09:52 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:52.859 02:09:52 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:52.859 02:09:52 -- target/filesystem.sh@69 -- # partprobe 00:07:53.117 02:09:52 -- target/filesystem.sh@70 -- # sleep 1 00:07:54.050 02:09:53 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:54.050 02:09:53 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:54.050 02:09:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:54.050 02:09:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.050 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:07:54.050 ************************************ 00:07:54.050 START TEST filesystem_ext4 00:07:54.050 ************************************ 00:07:54.050 02:09:53 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:54.050 02:09:53 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:54.050 02:09:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.050 02:09:53 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:54.050 02:09:53 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:54.050 02:09:53 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:54.050 02:09:53 -- common/autotest_common.sh@904 -- # local i=0 00:07:54.050 02:09:53 -- common/autotest_common.sh@905 -- # local force 00:07:54.050 02:09:53 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:54.050 02:09:53 -- common/autotest_common.sh@908 -- # force=-F 00:07:54.050 02:09:53 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:54.050 mke2fs 1.46.5 (30-Dec-2021) 00:07:54.309 Discarding device blocks: 0/522240 done 00:07:54.309 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:54.309 Filesystem UUID: 9ccfac3a-5132-49b8-af6e-c6264a346f06 00:07:54.309 Superblock backups stored on blocks: 00:07:54.309 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:54.309 00:07:54.309 Allocating group tables: 0/64 done 00:07:54.309 Writing inode tables: 0/64 done 00:07:54.309 Creating journal (8192 blocks): done 00:07:54.309 Writing superblocks and filesystem accounting information: 0/64 done 00:07:54.309 00:07:54.309 02:09:53 -- common/autotest_common.sh@921 -- # return 0 00:07:54.309 02:09:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.309 02:09:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.309 02:09:53 -- target/filesystem.sh@25 -- # sync 00:07:54.309 02:09:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.309 02:09:53 -- target/filesystem.sh@27 -- # sync 00:07:54.566 02:09:53 -- target/filesystem.sh@29 -- # i=0 00:07:54.566 02:09:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.566 02:09:53 -- target/filesystem.sh@37 -- # kill -0 71966 00:07:54.566 02:09:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.567 02:09:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.567 02:09:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.567 02:09:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.567 00:07:54.567 real 0m0.398s 00:07:54.567 user 0m0.025s 00:07:54.567 sys 0m0.054s 00:07:54.567 
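Initiator-side, each filesystem_* subtest is the round trip traced above: connect over TCP, wait for the subsystem's serial to surface as a block device, partition, format, and prove a file survives a sync. Condensed (the real waitforserial helper caps its polling at 15 attempts; hostnqn/hostid are the nvme gen-hostnqn values set in nvmf/common.sh):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
        sleep 2                     # poll until the namespace appears, e.g. as nvme0n1
    done
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    mkdir -p /mnt/device
    mkfs.ext4 -F /dev/nvme0n1p1     # the btrfs/xfs subtests swap in mkfs.btrfs -f / mkfs.xfs -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync   # create and flush a file on the exported namespace...
    rm /mnt/device/aaa && sync      # ...then remove it and flush again
    umount /mnt/device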
02:09:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.567 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:07:54.567 ************************************ 00:07:54.567 END TEST filesystem_ext4 00:07:54.567 ************************************ 00:07:54.567 02:09:53 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:54.567 02:09:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:54.567 02:09:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.567 02:09:53 -- common/autotest_common.sh@10 -- # set +x 00:07:54.567 ************************************ 00:07:54.567 START TEST filesystem_btrfs 00:07:54.567 ************************************ 00:07:54.567 02:09:53 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:54.567 02:09:53 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:54.567 02:09:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.567 02:09:53 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:54.567 02:09:53 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:54.567 02:09:53 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:54.567 02:09:53 -- common/autotest_common.sh@904 -- # local i=0 00:07:54.567 02:09:53 -- common/autotest_common.sh@905 -- # local force 00:07:54.567 02:09:53 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:54.567 02:09:53 -- common/autotest_common.sh@910 -- # force=-f 00:07:54.567 02:09:53 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:54.567 btrfs-progs v6.6.2 00:07:54.567 See https://btrfs.readthedocs.io for more information. 00:07:54.567 00:07:54.567 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:54.567 NOTE: several default settings have changed in version 5.15, please make sure 00:07:54.567 this does not affect your deployments: 00:07:54.567 - DUP for metadata (-m dup) 00:07:54.567 - enabled no-holes (-O no-holes) 00:07:54.567 - enabled free-space-tree (-R free-space-tree) 00:07:54.567 00:07:54.567 Label: (null) 00:07:54.567 UUID: 19daddf4-b1e6-4cf5-b678-2b9c24b955c4 00:07:54.567 Node size: 16384 00:07:54.567 Sector size: 4096 00:07:54.567 Filesystem size: 510.00MiB 00:07:54.567 Block group profiles: 00:07:54.567 Data: single 8.00MiB 00:07:54.567 Metadata: DUP 32.00MiB 00:07:54.567 System: DUP 8.00MiB 00:07:54.567 SSD detected: yes 00:07:54.567 Zoned device: no 00:07:54.567 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:54.567 Runtime features: free-space-tree 00:07:54.567 Checksum: crc32c 00:07:54.567 Number of devices: 1 00:07:54.567 Devices: 00:07:54.567 ID SIZE PATH 00:07:54.567 1 510.00MiB /dev/nvme0n1p1 00:07:54.567 00:07:54.825 02:09:54 -- common/autotest_common.sh@921 -- # return 0 00:07:54.825 02:09:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.825 02:09:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.825 02:09:54 -- target/filesystem.sh@25 -- # sync 00:07:54.825 02:09:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.825 02:09:54 -- target/filesystem.sh@27 -- # sync 00:07:54.825 02:09:54 -- target/filesystem.sh@29 -- # i=0 00:07:54.825 02:09:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.825 02:09:54 -- target/filesystem.sh@37 -- # kill -0 71966 00:07:54.825 02:09:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.825 02:09:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.825 02:09:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.825 02:09:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.825 00:07:54.826 real 0m0.270s 00:07:54.826 user 0m0.019s 00:07:54.826 sys 0m0.065s 00:07:54.826 02:09:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.826 02:09:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.826 ************************************ 00:07:54.826 END TEST filesystem_btrfs 00:07:54.826 ************************************ 00:07:54.826 02:09:54 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:54.826 02:09:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:54.826 02:09:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.826 02:09:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.826 ************************************ 00:07:54.826 START TEST filesystem_xfs 00:07:54.826 ************************************ 00:07:54.826 02:09:54 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:54.826 02:09:54 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:54.826 02:09:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.826 02:09:54 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:54.826 02:09:54 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:54.826 02:09:54 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:54.826 02:09:54 -- common/autotest_common.sh@904 -- # local i=0 00:07:54.826 02:09:54 -- common/autotest_common.sh@905 -- # local force 00:07:54.826 02:09:54 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:54.826 02:09:54 -- common/autotest_common.sh@910 -- # force=-f 00:07:54.826 02:09:54 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:54.826 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:54.826 = sectsz=512 attr=2, projid32bit=1 00:07:54.826 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:54.826 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:54.826 data = bsize=4096 blocks=130560, imaxpct=25 00:07:54.826 = sunit=0 swidth=0 blks 00:07:54.826 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:54.826 log =internal log bsize=4096 blocks=16384, version=2 00:07:54.826 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:54.826 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:55.761 Discarding blocks...Done. 00:07:55.761 02:09:55 -- common/autotest_common.sh@921 -- # return 0 00:07:55.761 02:09:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.291 02:09:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.291 02:09:57 -- target/filesystem.sh@25 -- # sync 00:07:58.291 02:09:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.291 02:09:57 -- target/filesystem.sh@27 -- # sync 00:07:58.291 02:09:57 -- target/filesystem.sh@29 -- # i=0 00:07:58.291 02:09:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.291 02:09:57 -- target/filesystem.sh@37 -- # kill -0 71966 00:07:58.291 02:09:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.291 02:09:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.291 02:09:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.291 02:09:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.291 00:07:58.291 real 0m3.112s 00:07:58.291 user 0m0.025s 00:07:58.291 sys 0m0.055s 00:07:58.291 02:09:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.291 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:58.291 ************************************ 00:07:58.291 END TEST filesystem_xfs 00:07:58.291 ************************************ 00:07:58.291 02:09:57 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:58.291 02:09:57 -- target/filesystem.sh@93 -- # sync 00:07:58.291 02:09:57 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:58.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.291 02:09:57 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:58.291 02:09:57 -- common/autotest_common.sh@1198 -- # local i=0 00:07:58.291 02:09:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:58.291 02:09:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.291 02:09:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:58.291 02:09:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.291 02:09:57 -- common/autotest_common.sh@1210 -- # return 0 00:07:58.292 02:09:57 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.292 02:09:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.292 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:58.292 02:09:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.292 02:09:57 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:58.292 02:09:57 -- target/filesystem.sh@101 -- # killprocess 71966 00:07:58.292 02:09:57 -- common/autotest_common.sh@926 -- # '[' -z 71966 ']' 00:07:58.292 02:09:57 -- common/autotest_common.sh@930 -- # kill -0 71966 00:07:58.292 02:09:57 -- 
common/autotest_common.sh@931 -- # uname 00:07:58.292 02:09:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:58.292 02:09:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71966 00:07:58.292 02:09:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:58.292 02:09:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:58.292 killing process with pid 71966 00:07:58.292 02:09:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71966' 00:07:58.292 02:09:57 -- common/autotest_common.sh@945 -- # kill 71966 00:07:58.292 02:09:57 -- common/autotest_common.sh@950 -- # wait 71966 00:07:58.550 02:09:57 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:58.550 00:07:58.550 real 0m9.165s 00:07:58.550 user 0m34.696s 00:07:58.550 sys 0m1.606s 00:07:58.550 02:09:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.550 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:58.550 ************************************ 00:07:58.550 END TEST nvmf_filesystem_no_in_capsule 00:07:58.550 ************************************ 00:07:58.550 02:09:57 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:58.550 02:09:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:58.550 02:09:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.550 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:58.550 ************************************ 00:07:58.550 START TEST nvmf_filesystem_in_capsule 00:07:58.550 ************************************ 00:07:58.550 02:09:57 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:58.550 02:09:57 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:58.550 02:09:57 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:58.550 02:09:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:58.550 02:09:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:58.550 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:58.550 02:09:57 -- nvmf/common.sh@469 -- # nvmfpid=72279 00:07:58.550 02:09:57 -- nvmf/common.sh@470 -- # waitforlisten 72279 00:07:58.550 02:09:57 -- common/autotest_common.sh@819 -- # '[' -z 72279 ']' 00:07:58.550 02:09:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.550 02:09:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.550 02:09:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.550 02:09:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.550 02:09:57 -- common/autotest_common.sh@10 -- # set +x 00:07:58.550 02:09:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:58.550 [2024-07-15 02:09:58.042086] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
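The nvmf_tgt invocation traced just above (nvmf/common.sh@468) is the whole launch recipe for this in-capsule run: the target starts inside the nvmf_tgt_ns_spdk namespace with shared-memory id 0 (-i), tracepoint group mask 0xFFFF (-e) and a four-core reactor mask (-m 0xF), and waitforlisten then blocks until the RPC socket answers. A minimal sketch of that sequence, assuming this job's repo path; the polling body is an approximation of waitforlisten (its max_retries=100 is visible in the trace), not the helper's verbatim code:

  # Launch the target in its namespace and wait for /var/tmp/spdk.sock.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods is a cheap probe that proves the app is listening
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &>/dev/null && break
      sleep 0.5
  done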
00:07:58.550 [2024-07-15 02:09:58.042197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.808 [2024-07-15 02:09:58.176780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.808 [2024-07-15 02:09:58.267219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:58.808 [2024-07-15 02:09:58.267368] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.808 [2024-07-15 02:09:58.267382] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.808 [2024-07-15 02:09:58.267391] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.808 [2024-07-15 02:09:58.267515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.808 [2024-07-15 02:09:58.267651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.808 [2024-07-15 02:09:58.268359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.808 [2024-07-15 02:09:58.268405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.742 02:09:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:59.742 02:09:59 -- common/autotest_common.sh@852 -- # return 0 00:07:59.742 02:09:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:59.742 02:09:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:59.742 02:09:59 -- common/autotest_common.sh@10 -- # set +x 00:07:59.742 02:09:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.742 02:09:59 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:59.742 02:09:59 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:59.742 02:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.742 02:09:59 -- common/autotest_common.sh@10 -- # set +x 00:07:59.742 [2024-07-15 02:09:59.047422] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.742 02:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.742 02:09:59 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:59.742 02:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.742 02:09:59 -- common/autotest_common.sh@10 -- # set +x 00:07:59.742 Malloc1 00:07:59.742 02:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.742 02:09:59 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:59.742 02:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.742 02:09:59 -- common/autotest_common.sh@10 -- # set +x 00:07:59.742 02:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.742 02:09:59 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:59.742 02:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.742 02:09:59 -- common/autotest_common.sh@10 -- # set +x 00:07:59.742 02:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.742 02:09:59 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.742 02:09:59 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.742 02:09:59 -- common/autotest_common.sh@10 -- # set +x 00:07:59.742 [2024-07-15 02:09:59.237284] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.742 02:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.742 02:09:59 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:59.742 02:09:59 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:59.742 02:09:59 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:59.742 02:09:59 -- common/autotest_common.sh@1359 -- # local bs 00:07:59.742 02:09:59 -- common/autotest_common.sh@1360 -- # local nb 00:07:59.742 02:09:59 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:59.742 02:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.742 02:09:59 -- common/autotest_common.sh@10 -- # set +x 00:07:59.742 02:09:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.742 02:09:59 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:59.742 { 00:07:59.742 "aliases": [ 00:07:59.742 "da8e1e5a-2bb0-4314-8614-778ceed2f45e" 00:07:59.742 ], 00:07:59.742 "assigned_rate_limits": { 00:07:59.742 "r_mbytes_per_sec": 0, 00:07:59.742 "rw_ios_per_sec": 0, 00:07:59.743 "rw_mbytes_per_sec": 0, 00:07:59.743 "w_mbytes_per_sec": 0 00:07:59.743 }, 00:07:59.743 "block_size": 512, 00:07:59.743 "claim_type": "exclusive_write", 00:07:59.743 "claimed": true, 00:07:59.743 "driver_specific": {}, 00:07:59.743 "memory_domains": [ 00:07:59.743 { 00:07:59.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.743 "dma_device_type": 2 00:07:59.743 } 00:07:59.743 ], 00:07:59.743 "name": "Malloc1", 00:07:59.743 "num_blocks": 1048576, 00:07:59.743 "product_name": "Malloc disk", 00:07:59.743 "supported_io_types": { 00:07:59.743 "abort": true, 00:07:59.743 "compare": false, 00:07:59.743 "compare_and_write": false, 00:07:59.743 "flush": true, 00:07:59.743 "nvme_admin": false, 00:07:59.743 "nvme_io": false, 00:07:59.743 "read": true, 00:07:59.743 "reset": true, 00:07:59.743 "unmap": true, 00:07:59.743 "write": true, 00:07:59.743 "write_zeroes": true 00:07:59.743 }, 00:07:59.743 "uuid": "da8e1e5a-2bb0-4314-8614-778ceed2f45e", 00:07:59.743 "zoned": false 00:07:59.743 } 00:07:59.743 ]' 00:07:59.743 02:09:59 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:00.001 02:09:59 -- common/autotest_common.sh@1362 -- # bs=512 00:08:00.001 02:09:59 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:00.001 02:09:59 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:00.001 02:09:59 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:00.001 02:09:59 -- common/autotest_common.sh@1367 -- # echo 512 00:08:00.001 02:09:59 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:00.001 02:09:59 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.001 02:09:59 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.001 02:09:59 -- common/autotest_common.sh@1177 -- # local i=0 00:08:00.001 02:09:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.001 02:09:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:00.001 02:09:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:02.556 02:10:01 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:02.556 02:10:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:02.556 02:10:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.556 02:10:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:02.556 02:10:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.556 02:10:01 -- common/autotest_common.sh@1187 -- # return 0 00:08:02.556 02:10:01 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:02.556 02:10:01 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:02.556 02:10:01 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:02.556 02:10:01 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:02.556 02:10:01 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:02.556 02:10:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:02.556 02:10:01 -- setup/common.sh@80 -- # echo 536870912 00:08:02.556 02:10:01 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:02.556 02:10:01 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:02.556 02:10:01 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:02.556 02:10:01 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:02.556 02:10:01 -- target/filesystem.sh@69 -- # partprobe 00:08:02.556 02:10:01 -- target/filesystem.sh@70 -- # sleep 1 00:08:03.124 02:10:02 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:03.124 02:10:02 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:03.124 02:10:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:03.124 02:10:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.124 02:10:02 -- common/autotest_common.sh@10 -- # set +x 00:08:03.124 ************************************ 00:08:03.124 START TEST filesystem_in_capsule_ext4 00:08:03.124 ************************************ 00:08:03.383 02:10:02 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:03.383 02:10:02 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:03.383 02:10:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.383 02:10:02 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:03.383 02:10:02 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:03.383 02:10:02 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:03.383 02:10:02 -- common/autotest_common.sh@904 -- # local i=0 00:08:03.383 02:10:02 -- common/autotest_common.sh@905 -- # local force 00:08:03.383 02:10:02 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:03.383 02:10:02 -- common/autotest_common.sh@908 -- # force=-F 00:08:03.383 02:10:02 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:03.383 mke2fs 1.46.5 (30-Dec-2021) 00:08:03.383 Discarding device blocks: 0/522240 done 00:08:03.383 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:03.383 Filesystem UUID: 018c9ef4-7022-47cb-9f46-918906cd8b94 00:08:03.383 Superblock backups stored on blocks: 00:08:03.383 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:03.383 00:08:03.383 Allocating group tables: 0/64 done 00:08:03.383 Writing inode tables: 0/64 done 00:08:03.383 Creating journal (8192 blocks): done 00:08:03.383 Writing superblocks and filesystem accounting information: 0/64 done 00:08:03.383 00:08:03.383 
02:10:02 -- common/autotest_common.sh@921 -- # return 0 00:08:03.383 02:10:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.383 02:10:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.642 02:10:02 -- target/filesystem.sh@25 -- # sync 00:08:03.642 02:10:03 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.642 02:10:03 -- target/filesystem.sh@27 -- # sync 00:08:03.642 02:10:03 -- target/filesystem.sh@29 -- # i=0 00:08:03.642 02:10:03 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.642 02:10:03 -- target/filesystem.sh@37 -- # kill -0 72279 00:08:03.642 02:10:03 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.642 02:10:03 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.642 02:10:03 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.642 02:10:03 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.642 ************************************ 00:08:03.642 END TEST filesystem_in_capsule_ext4 00:08:03.642 ************************************ 00:08:03.642 00:08:03.642 real 0m0.403s 00:08:03.642 user 0m0.023s 00:08:03.642 sys 0m0.061s 00:08:03.642 02:10:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.642 02:10:03 -- common/autotest_common.sh@10 -- # set +x 00:08:03.642 02:10:03 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:03.642 02:10:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:03.642 02:10:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.642 02:10:03 -- common/autotest_common.sh@10 -- # set +x 00:08:03.642 ************************************ 00:08:03.642 START TEST filesystem_in_capsule_btrfs 00:08:03.642 ************************************ 00:08:03.642 02:10:03 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:03.642 02:10:03 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:03.642 02:10:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.642 02:10:03 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:03.642 02:10:03 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:03.642 02:10:03 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:03.642 02:10:03 -- common/autotest_common.sh@904 -- # local i=0 00:08:03.642 02:10:03 -- common/autotest_common.sh@905 -- # local force 00:08:03.642 02:10:03 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:03.642 02:10:03 -- common/autotest_common.sh@910 -- # force=-f 00:08:03.642 02:10:03 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:03.899 btrfs-progs v6.6.2 00:08:03.899 See https://btrfs.readthedocs.io for more information. 00:08:03.899 00:08:03.899 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:03.899 NOTE: several default settings have changed in version 5.15, please make sure 00:08:03.899 this does not affect your deployments: 00:08:03.899 - DUP for metadata (-m dup) 00:08:03.899 - enabled no-holes (-O no-holes) 00:08:03.899 - enabled free-space-tree (-R free-space-tree) 00:08:03.899 00:08:03.899 Label: (null) 00:08:03.899 UUID: 6dde0b15-eb79-42ff-82fa-c814b20c1c15 00:08:03.899 Node size: 16384 00:08:03.899 Sector size: 4096 00:08:03.899 Filesystem size: 510.00MiB 00:08:03.899 Block group profiles: 00:08:03.899 Data: single 8.00MiB 00:08:03.899 Metadata: DUP 32.00MiB 00:08:03.899 System: DUP 8.00MiB 00:08:03.899 SSD detected: yes 00:08:03.899 Zoned device: no 00:08:03.899 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:03.899 Runtime features: free-space-tree 00:08:03.899 Checksum: crc32c 00:08:03.899 Number of devices: 1 00:08:03.899 Devices: 00:08:03.899 ID SIZE PATH 00:08:03.899 1 510.00MiB /dev/nvme0n1p1 00:08:03.899 00:08:03.899 02:10:03 -- common/autotest_common.sh@921 -- # return 0 00:08:03.899 02:10:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.899 02:10:03 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.899 02:10:03 -- target/filesystem.sh@25 -- # sync 00:08:03.899 02:10:03 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.899 02:10:03 -- target/filesystem.sh@27 -- # sync 00:08:03.899 02:10:03 -- target/filesystem.sh@29 -- # i=0 00:08:03.899 02:10:03 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.899 02:10:03 -- target/filesystem.sh@37 -- # kill -0 72279 00:08:03.899 02:10:03 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.899 02:10:03 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.899 02:10:03 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.899 02:10:03 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.899 ************************************ 00:08:03.899 END TEST filesystem_in_capsule_btrfs 00:08:03.899 ************************************ 00:08:03.899 00:08:03.899 real 0m0.216s 00:08:03.899 user 0m0.022s 00:08:03.899 sys 0m0.057s 00:08:03.899 02:10:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.899 02:10:03 -- common/autotest_common.sh@10 -- # set +x 00:08:03.899 02:10:03 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:03.899 02:10:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:03.899 02:10:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.899 02:10:03 -- common/autotest_common.sh@10 -- # set +x 00:08:03.899 ************************************ 00:08:03.899 START TEST filesystem_in_capsule_xfs 00:08:03.899 ************************************ 00:08:03.899 02:10:03 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:03.899 02:10:03 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:03.899 02:10:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.899 02:10:03 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:03.899 02:10:03 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:03.899 02:10:03 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:03.899 02:10:03 -- common/autotest_common.sh@904 -- # local i=0 00:08:03.899 02:10:03 -- common/autotest_common.sh@905 -- # local force 00:08:03.899 02:10:03 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:03.899 02:10:03 -- common/autotest_common.sh@910 -- # force=-f 
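The xtrace lines just above (common/autotest_common.sh@902 through @910) walk make_filesystem's flag selection for xfs; the mkfs call at @913 follows below. Reconstructed from exactly those trace lines, the helper is roughly the following sketch; only the flag choice, the mkfs invocation and the return at @921 are visible in this log, so the failure path is an assumption:

  make_filesystem() {
      local fstype=$1
      local dev_name=$2        # here /dev/nvme0n1p1
      local i=0                # retry counter, per @904
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F             # mke2fs spells its force flag differently
      else
          force=-f             # mkfs.xfs and mkfs.btrfs both take -f
      fi
      mkfs.$fstype $force "$dev_name" && return 0
      return 1                 # assumption: the real helper may retry via $i
  }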
00:08:03.899 02:10:03 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:04.156 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:04.156 = sectsz=512 attr=2, projid32bit=1 00:08:04.156 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:04.156 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:04.156 data = bsize=4096 blocks=130560, imaxpct=25 00:08:04.156 = sunit=0 swidth=0 blks 00:08:04.156 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:04.156 log =internal log bsize=4096 blocks=16384, version=2 00:08:04.156 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:04.157 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:04.722 Discarding blocks...Done. 00:08:04.722 02:10:04 -- common/autotest_common.sh@921 -- # return 0 00:08:04.722 02:10:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.622 02:10:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.622 02:10:05 -- target/filesystem.sh@25 -- # sync 00:08:06.622 02:10:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.622 02:10:05 -- target/filesystem.sh@27 -- # sync 00:08:06.622 02:10:05 -- target/filesystem.sh@29 -- # i=0 00:08:06.622 02:10:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.622 02:10:05 -- target/filesystem.sh@37 -- # kill -0 72279 00:08:06.622 02:10:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.622 02:10:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.622 02:10:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.622 02:10:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.622 ************************************ 00:08:06.622 END TEST filesystem_in_capsule_xfs 00:08:06.622 ************************************ 00:08:06.622 00:08:06.622 real 0m2.597s 00:08:06.622 user 0m0.021s 00:08:06.622 sys 0m0.055s 00:08:06.622 02:10:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.622 02:10:05 -- common/autotest_common.sh@10 -- # set +x 00:08:06.622 02:10:06 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:06.622 02:10:06 -- target/filesystem.sh@93 -- # sync 00:08:06.622 02:10:06 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:06.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.622 02:10:06 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:06.622 02:10:06 -- common/autotest_common.sh@1198 -- # local i=0 00:08:06.622 02:10:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:06.622 02:10:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.622 02:10:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:06.622 02:10:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:06.622 02:10:06 -- common/autotest_common.sh@1210 -- # return 0 00:08:06.622 02:10:06 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.622 02:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.622 02:10:06 -- common/autotest_common.sh@10 -- # set +x 00:08:06.622 02:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.622 02:10:06 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:06.622 02:10:06 -- target/filesystem.sh@101 -- # killprocess 72279 00:08:06.622 02:10:06 -- common/autotest_common.sh@926 -- # '[' -z 72279 ']' 00:08:06.622 02:10:06 -- common/autotest_common.sh@930 -- # kill -0 72279 
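With filesystem_in_capsule_xfs passed, the teardown traced here (target/filesystem.sh@91 through the killprocess helper) unwinds everything the test built: remove the SPDK_TEST partition under an flock, flush, disconnect the initiator, delete the subsystem over RPC, and stop pid 72279. Condensed, assuming rpc_cmd is the suite's thin wrapper over scripts/rpc.py:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # flock guards the partition table against concurrent access
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 72279 && wait 72279                          # killprocess, minus its uname/sudo guards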
00:08:06.622 02:10:06 -- common/autotest_common.sh@931 -- # uname 00:08:06.622 02:10:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:06.622 02:10:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72279 00:08:06.880 killing process with pid 72279 00:08:06.880 02:10:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:06.880 02:10:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:06.880 02:10:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72279' 00:08:06.880 02:10:06 -- common/autotest_common.sh@945 -- # kill 72279 00:08:06.880 02:10:06 -- common/autotest_common.sh@950 -- # wait 72279 00:08:07.138 02:10:06 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:07.138 00:08:07.138 real 0m8.593s 00:08:07.138 user 0m32.492s 00:08:07.138 sys 0m1.538s 00:08:07.138 02:10:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.138 ************************************ 00:08:07.138 END TEST nvmf_filesystem_in_capsule 00:08:07.138 02:10:06 -- common/autotest_common.sh@10 -- # set +x 00:08:07.138 ************************************ 00:08:07.138 02:10:06 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:07.138 02:10:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:07.138 02:10:06 -- nvmf/common.sh@116 -- # sync 00:08:07.138 02:10:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:07.138 02:10:06 -- nvmf/common.sh@119 -- # set +e 00:08:07.138 02:10:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:07.138 02:10:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:07.138 rmmod nvme_tcp 00:08:07.138 rmmod nvme_fabrics 00:08:07.398 rmmod nvme_keyring 00:08:07.398 02:10:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:07.398 02:10:06 -- nvmf/common.sh@123 -- # set -e 00:08:07.398 02:10:06 -- nvmf/common.sh@124 -- # return 0 00:08:07.398 02:10:06 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:07.398 02:10:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:07.398 02:10:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:07.398 02:10:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:07.398 02:10:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.398 02:10:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:07.398 02:10:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.398 02:10:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.398 02:10:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.398 02:10:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:07.398 00:08:07.398 real 0m18.580s 00:08:07.398 user 1m7.405s 00:08:07.398 sys 0m3.517s 00:08:07.398 02:10:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.398 02:10:06 -- common/autotest_common.sh@10 -- # set +x 00:08:07.398 ************************************ 00:08:07.398 END TEST nvmf_filesystem 00:08:07.398 ************************************ 00:08:07.398 02:10:06 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:07.398 02:10:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:07.398 02:10:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.398 02:10:06 -- common/autotest_common.sh@10 -- # set +x 00:08:07.398 ************************************ 00:08:07.398 START TEST nvmf_discovery 00:08:07.398 ************************************ 00:08:07.398 02:10:06 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:07.398 * Looking for test storage... 00:08:07.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.398 02:10:06 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.398 02:10:06 -- nvmf/common.sh@7 -- # uname -s 00:08:07.398 02:10:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.398 02:10:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.398 02:10:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.398 02:10:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.398 02:10:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.398 02:10:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.398 02:10:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.398 02:10:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.398 02:10:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.398 02:10:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.398 02:10:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:08:07.398 02:10:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:08:07.398 02:10:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.398 02:10:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.398 02:10:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.398 02:10:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.398 02:10:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.398 02:10:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.398 02:10:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.398 02:10:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.398 02:10:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.398 02:10:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.398 02:10:06 -- paths/export.sh@5 -- # export PATH 00:08:07.398 02:10:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.398 02:10:06 -- nvmf/common.sh@46 -- # : 0 00:08:07.398 02:10:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.398 02:10:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.398 02:10:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.398 02:10:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.398 02:10:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.398 02:10:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:07.398 02:10:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.398 02:10:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.398 02:10:06 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:07.398 02:10:06 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:07.398 02:10:06 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:07.398 02:10:06 -- target/discovery.sh@15 -- # hash nvme 00:08:07.398 02:10:06 -- target/discovery.sh@20 -- # nvmftestinit 00:08:07.398 02:10:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:07.398 02:10:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.398 02:10:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.398 02:10:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.398 02:10:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.398 02:10:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.398 02:10:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.398 02:10:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.398 02:10:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:07.398 02:10:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:07.398 02:10:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:07.399 02:10:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:07.399 02:10:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:07.399 02:10:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:07.399 02:10:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.399 02:10:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.399 02:10:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:07.399 02:10:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:07.399 02:10:06 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.399 02:10:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.399 02:10:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.399 02:10:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.399 02:10:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.399 02:10:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.399 02:10:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.399 02:10:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.399 02:10:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:07.399 02:10:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:07.399 Cannot find device "nvmf_tgt_br" 00:08:07.399 02:10:06 -- nvmf/common.sh@154 -- # true 00:08:07.399 02:10:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.656 Cannot find device "nvmf_tgt_br2" 00:08:07.656 02:10:06 -- nvmf/common.sh@155 -- # true 00:08:07.656 02:10:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:07.656 02:10:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:07.656 Cannot find device "nvmf_tgt_br" 00:08:07.656 02:10:06 -- nvmf/common.sh@157 -- # true 00:08:07.656 02:10:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:07.656 Cannot find device "nvmf_tgt_br2" 00:08:07.656 02:10:06 -- nvmf/common.sh@158 -- # true 00:08:07.657 02:10:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:07.657 02:10:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:07.657 02:10:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:07.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.657 02:10:07 -- nvmf/common.sh@161 -- # true 00:08:07.657 02:10:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:07.657 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.657 02:10:07 -- nvmf/common.sh@162 -- # true 00:08:07.657 02:10:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:07.657 02:10:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:07.657 02:10:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:07.657 02:10:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:07.657 02:10:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:07.657 02:10:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:07.657 02:10:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:07.657 02:10:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:07.657 02:10:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:07.657 02:10:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:07.657 02:10:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:07.657 02:10:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:07.657 02:10:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:07.657 02:10:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:07.657 02:10:07 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:07.657 02:10:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:07.657 02:10:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:07.657 02:10:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:07.657 02:10:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:07.657 02:10:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:07.914 02:10:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:07.914 02:10:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:07.914 02:10:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:07.914 02:10:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:07.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:08:07.914 00:08:07.914 --- 10.0.0.2 ping statistics --- 00:08:07.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.914 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:07.914 02:10:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:07.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:07.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:07.914 00:08:07.914 --- 10.0.0.3 ping statistics --- 00:08:07.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.914 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:07.914 02:10:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:07.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:07.914 00:08:07.914 --- 10.0.0.1 ping statistics --- 00:08:07.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.915 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:07.915 02:10:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.915 02:10:07 -- nvmf/common.sh@421 -- # return 0 00:08:07.915 02:10:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:07.915 02:10:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.915 02:10:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:07.915 02:10:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:07.915 02:10:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.915 02:10:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:07.915 02:10:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:07.915 02:10:07 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:07.915 02:10:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:07.915 02:10:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:07.915 02:10:07 -- common/autotest_common.sh@10 -- # set +x 00:08:07.915 02:10:07 -- nvmf/common.sh@469 -- # nvmfpid=72725 00:08:07.915 02:10:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.915 02:10:07 -- nvmf/common.sh@470 -- # waitforlisten 72725 00:08:07.915 02:10:07 -- common/autotest_common.sh@819 -- # '[' -z 72725 ']' 00:08:07.915 02:10:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.915 02:10:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:07.915 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.915 02:10:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.915 02:10:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:07.915 02:10:07 -- common/autotest_common.sh@10 -- # set +x 00:08:07.915 [2024-07-15 02:10:07.333321] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:07.915 [2024-07-15 02:10:07.333405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.172 [2024-07-15 02:10:07.475248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.172 [2024-07-15 02:10:07.599984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:08.172 [2024-07-15 02:10:07.600398] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.172 [2024-07-15 02:10:07.600545] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.172 [2024-07-15 02:10:07.600695] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.172 [2024-07-15 02:10:07.600918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.172 [2024-07-15 02:10:07.601019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.172 [2024-07-15 02:10:07.601670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.172 [2024-07-15 02:10:07.601683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.113 02:10:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:09.113 02:10:08 -- common/autotest_common.sh@852 -- # return 0 00:08:09.113 02:10:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:09.113 02:10:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 02:10:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.113 02:10:08 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 [2024-07-15 02:10:08.374627] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@26 -- # seq 1 4 00:08:09.113 02:10:08 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.113 02:10:08 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 Null1 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
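Before the namespace loop continues below for Null2 through Null4, it is worth collapsing the networking that made 10.0.0.2 reachable in the first place: the long nvmf_veth_init trace earlier (nvmf/common.sh@165 through @206) builds a veth pair per side, moves the target half into its own network namespace, and bridges the host-side halves. Condensed from those trace lines, with the link bring-up order slightly simplified:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br     # both host-side halves join the bridge
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

That plumbing is what the three successful pings between 10.0.0.1, 10.0.0.2 and 10.0.0.3 in the trace were verifying.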
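As for the population now being traced: discovery.sh@26-@30 run the same four RPCs once per namespace, then @32 and @35 expose the discovery subsystem itself and publish a referral, which is exactly what the six-record discovery log further below reports. Compactly, assuming rpc_cmd forwards to scripts/rpc.py, and taking NULL_BDEV_SIZE=102400 / NULL_BLOCK_SIZE=512 from discovery.sh@11-@12 earlier:

  rpc.py nvmf_create_transport -t tcp -o -u 8192      # the suite's stock TCP transport options
  for i in 1 2 3 4; do
      rpc.py bdev_null_create Null$i 102400 512       # name, size, block size; no backing storage
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430   # shows up as Discovery Log Entry 5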
00:08:09.113 02:10:08 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 [2024-07-15 02:10:08.431238] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.113 02:10:08 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 Null2 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.113 02:10:08 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 Null3 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.113 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.113 02:10:08 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:09.113 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.113 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:09.114 02:10:08 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 Null4 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.114 00:08:09.114 Discovery Log Number of Records 6, Generation counter 6 00:08:09.114 =====Discovery Log Entry 0====== 00:08:09.114 trtype: tcp 00:08:09.114 adrfam: ipv4 00:08:09.114 subtype: current discovery subsystem 00:08:09.114 treq: not required 00:08:09.114 portid: 0 00:08:09.114 trsvcid: 4420 00:08:09.114 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:09.114 traddr: 10.0.0.2 00:08:09.114 eflags: explicit discovery connections, duplicate discovery information 00:08:09.114 sectype: none 00:08:09.114 =====Discovery Log Entry 1====== 00:08:09.114 trtype: tcp 00:08:09.114 adrfam: ipv4 00:08:09.114 subtype: nvme subsystem 00:08:09.114 treq: not required 00:08:09.114 portid: 0 00:08:09.114 trsvcid: 4420 00:08:09.114 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:09.114 traddr: 10.0.0.2 00:08:09.114 eflags: none 00:08:09.114 sectype: none 00:08:09.114 =====Discovery Log Entry 2====== 00:08:09.114 trtype: tcp 00:08:09.114 adrfam: ipv4 00:08:09.114 subtype: nvme subsystem 00:08:09.114 treq: not required 00:08:09.114 portid: 0 00:08:09.114 trsvcid: 4420 
00:08:09.114 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:09.114 traddr: 10.0.0.2 00:08:09.114 eflags: none 00:08:09.114 sectype: none 00:08:09.114 =====Discovery Log Entry 3====== 00:08:09.114 trtype: tcp 00:08:09.114 adrfam: ipv4 00:08:09.114 subtype: nvme subsystem 00:08:09.114 treq: not required 00:08:09.114 portid: 0 00:08:09.114 trsvcid: 4420 00:08:09.114 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:09.114 traddr: 10.0.0.2 00:08:09.114 eflags: none 00:08:09.114 sectype: none 00:08:09.114 =====Discovery Log Entry 4====== 00:08:09.114 trtype: tcp 00:08:09.114 adrfam: ipv4 00:08:09.114 subtype: nvme subsystem 00:08:09.114 treq: not required 00:08:09.114 portid: 0 00:08:09.114 trsvcid: 4420 00:08:09.114 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:09.114 traddr: 10.0.0.2 00:08:09.114 eflags: none 00:08:09.114 sectype: none 00:08:09.114 =====Discovery Log Entry 5====== 00:08:09.114 trtype: tcp 00:08:09.114 adrfam: ipv4 00:08:09.114 subtype: discovery subsystem referral 00:08:09.114 treq: not required 00:08:09.114 portid: 0 00:08:09.114 trsvcid: 4430 00:08:09.114 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:09.114 traddr: 10.0.0.2 00:08:09.114 eflags: none 00:08:09.114 sectype: none 00:08:09.114 Perform nvmf subsystem discovery via RPC 00:08:09.114 02:10:08 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:09.114 02:10:08 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 [2024-07-15 02:10:08.619228] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:09.114 [ 00:08:09.114 { 00:08:09.114 "allow_any_host": true, 00:08:09.114 "hosts": [], 00:08:09.114 "listen_addresses": [ 00:08:09.114 { 00:08:09.114 "adrfam": "IPv4", 00:08:09.114 "traddr": "10.0.0.2", 00:08:09.114 "transport": "TCP", 00:08:09.114 "trsvcid": "4420", 00:08:09.114 "trtype": "TCP" 00:08:09.114 } 00:08:09.114 ], 00:08:09.114 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:09.114 "subtype": "Discovery" 00:08:09.114 }, 00:08:09.114 { 00:08:09.114 "allow_any_host": true, 00:08:09.114 "hosts": [], 00:08:09.114 "listen_addresses": [ 00:08:09.114 { 00:08:09.114 "adrfam": "IPv4", 00:08:09.114 "traddr": "10.0.0.2", 00:08:09.114 "transport": "TCP", 00:08:09.114 "trsvcid": "4420", 00:08:09.114 "trtype": "TCP" 00:08:09.114 } 00:08:09.114 ], 00:08:09.114 "max_cntlid": 65519, 00:08:09.114 "max_namespaces": 32, 00:08:09.114 "min_cntlid": 1, 00:08:09.114 "model_number": "SPDK bdev Controller", 00:08:09.114 "namespaces": [ 00:08:09.114 { 00:08:09.114 "bdev_name": "Null1", 00:08:09.114 "name": "Null1", 00:08:09.114 "nguid": "656F74F4F181484AACFCBC4B30008F1E", 00:08:09.114 "nsid": 1, 00:08:09.114 "uuid": "656f74f4-f181-484a-acfc-bc4b30008f1e" 00:08:09.114 } 00:08:09.114 ], 00:08:09.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:09.114 "serial_number": "SPDK00000000000001", 00:08:09.114 "subtype": "NVMe" 00:08:09.114 }, 00:08:09.114 { 00:08:09.114 "allow_any_host": true, 00:08:09.114 "hosts": [], 00:08:09.114 "listen_addresses": [ 00:08:09.114 { 00:08:09.114 "adrfam": "IPv4", 00:08:09.114 "traddr": "10.0.0.2", 00:08:09.114 "transport": "TCP", 00:08:09.114 "trsvcid": "4420", 00:08:09.114 "trtype": "TCP" 00:08:09.114 } 00:08:09.114 ], 00:08:09.114 "max_cntlid": 65519, 00:08:09.114 "max_namespaces": 32, 00:08:09.114 "min_cntlid": 1, 
00:08:09.114 "model_number": "SPDK bdev Controller", 00:08:09.114 "namespaces": [ 00:08:09.114 { 00:08:09.114 "bdev_name": "Null2", 00:08:09.114 "name": "Null2", 00:08:09.114 "nguid": "A09808B4034B4269BBE2A5765F69B76F", 00:08:09.114 "nsid": 1, 00:08:09.114 "uuid": "a09808b4-034b-4269-bbe2-a5765f69b76f" 00:08:09.114 } 00:08:09.114 ], 00:08:09.114 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:09.114 "serial_number": "SPDK00000000000002", 00:08:09.114 "subtype": "NVMe" 00:08:09.114 }, 00:08:09.114 { 00:08:09.114 "allow_any_host": true, 00:08:09.114 "hosts": [], 00:08:09.114 "listen_addresses": [ 00:08:09.114 { 00:08:09.114 "adrfam": "IPv4", 00:08:09.114 "traddr": "10.0.0.2", 00:08:09.114 "transport": "TCP", 00:08:09.114 "trsvcid": "4420", 00:08:09.114 "trtype": "TCP" 00:08:09.114 } 00:08:09.114 ], 00:08:09.114 "max_cntlid": 65519, 00:08:09.114 "max_namespaces": 32, 00:08:09.114 "min_cntlid": 1, 00:08:09.114 "model_number": "SPDK bdev Controller", 00:08:09.114 "namespaces": [ 00:08:09.114 { 00:08:09.114 "bdev_name": "Null3", 00:08:09.114 "name": "Null3", 00:08:09.114 "nguid": "7BCB0FEE53B64B658CE92103DC2D2F85", 00:08:09.114 "nsid": 1, 00:08:09.114 "uuid": "7bcb0fee-53b6-4b65-8ce9-2103dc2d2f85" 00:08:09.114 } 00:08:09.114 ], 00:08:09.114 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:09.114 "serial_number": "SPDK00000000000003", 00:08:09.114 "subtype": "NVMe" 00:08:09.114 }, 00:08:09.114 { 00:08:09.114 "allow_any_host": true, 00:08:09.114 "hosts": [], 00:08:09.114 "listen_addresses": [ 00:08:09.114 { 00:08:09.114 "adrfam": "IPv4", 00:08:09.114 "traddr": "10.0.0.2", 00:08:09.114 "transport": "TCP", 00:08:09.114 "trsvcid": "4420", 00:08:09.114 "trtype": "TCP" 00:08:09.114 } 00:08:09.114 ], 00:08:09.114 "max_cntlid": 65519, 00:08:09.114 "max_namespaces": 32, 00:08:09.114 "min_cntlid": 1, 00:08:09.114 "model_number": "SPDK bdev Controller", 00:08:09.114 "namespaces": [ 00:08:09.114 { 00:08:09.114 "bdev_name": "Null4", 00:08:09.114 "name": "Null4", 00:08:09.114 "nguid": "C5E478A2FB1343ED8AB62EDDD9B72395", 00:08:09.114 "nsid": 1, 00:08:09.114 "uuid": "c5e478a2-fb13-43ed-8ab6-2eddd9b72395" 00:08:09.114 } 00:08:09.114 ], 00:08:09.114 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:09.114 "serial_number": "SPDK00000000000004", 00:08:09.114 "subtype": "NVMe" 00:08:09.114 } 00:08:09.114 ] 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@42 -- # seq 1 4 00:08:09.114 02:10:08 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:09.114 02:10:08 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.114 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.114 02:10:08 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:09.114 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.114 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.390 02:10:08 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:09.390 02:10:08 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:09.390 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.390 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.390 02:10:08 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:09.390 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.390 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.390 02:10:08 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:09.390 02:10:08 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:09.390 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.390 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.390 02:10:08 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:09.390 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.390 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.390 02:10:08 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:09.390 02:10:08 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:09.390 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.390 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.390 02:10:08 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:09.390 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.390 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.390 02:10:08 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:09.390 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.390 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.390 02:10:08 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:09.390 02:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.390 02:10:08 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:09.390 02:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:09.390 02:10:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.390 02:10:08 -- target/discovery.sh@49 -- # check_bdevs= 00:08:09.390 02:10:08 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:09.390 02:10:08 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:09.390 02:10:08 -- target/discovery.sh@57 -- # nvmftestfini 00:08:09.390 02:10:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:09.390 02:10:08 -- nvmf/common.sh@116 -- # sync 00:08:09.390 02:10:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:09.390 02:10:08 -- nvmf/common.sh@119 -- # set +e 00:08:09.390 02:10:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:09.390 02:10:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:09.390 rmmod nvme_tcp 00:08:09.390 rmmod nvme_fabrics 00:08:09.390 rmmod nvme_keyring 00:08:09.390 02:10:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:09.390 02:10:08 -- nvmf/common.sh@123 -- # set -e 00:08:09.390 02:10:08 -- nvmf/common.sh@124 -- # return 0 00:08:09.390 02:10:08 -- nvmf/common.sh@477 -- # '[' -n 72725 ']' 00:08:09.390 02:10:08 -- nvmf/common.sh@478 -- # killprocess 72725 00:08:09.390 02:10:08 -- common/autotest_common.sh@926 -- # '[' -z 72725 ']' 00:08:09.390 02:10:08 -- 
common/autotest_common.sh@930 -- # kill -0 72725 00:08:09.390 02:10:08 -- common/autotest_common.sh@931 -- # uname 00:08:09.390 02:10:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:09.390 02:10:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72725 00:08:09.390 02:10:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:09.390 02:10:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:09.390 02:10:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72725' 00:08:09.390 killing process with pid 72725 00:08:09.390 02:10:08 -- common/autotest_common.sh@945 -- # kill 72725 00:08:09.390 [2024-07-15 02:10:08.881855] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:09.390 02:10:08 -- common/autotest_common.sh@950 -- # wait 72725 00:08:09.648 02:10:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:09.648 02:10:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:09.648 02:10:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:09.648 02:10:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.648 02:10:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:09.648 02:10:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.648 02:10:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.648 02:10:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.648 02:10:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:09.648 00:08:09.648 real 0m2.324s 00:08:09.648 user 0m6.235s 00:08:09.648 sys 0m0.635s 00:08:09.648 02:10:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.648 02:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 ************************************ 00:08:09.648 END TEST nvmf_discovery 00:08:09.648 ************************************ 00:08:09.648 02:10:09 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:09.648 02:10:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:09.648 02:10:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.648 02:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:09.648 ************************************ 00:08:09.648 START TEST nvmf_referrals 00:08:09.648 ************************************ 00:08:09.648 02:10:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:09.906 * Looking for test storage... 
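For reference, the teardown that discovery.sh just walked through above reduces to a short RPC sequence. A minimal sketch, assuming a target reachable over SPDK's default /var/tmp/spdk.sock socket and using scripts/rpc.py in place of the harness's rpc_cmd wrapper:

  # Delete each NVMe subsystem, then the null bdev backing it (cnode1..cnode4 / Null1..Null4 above).
  for i in $(seq 1 4); do
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
      scripts/rpc.py bdev_null_delete "Null${i}"
  done
  # Drop the referral that appeared as Discovery Log Entry 5 (10.0.0.2:4430).
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430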
00:08:09.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.906 02:10:09 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.906 02:10:09 -- nvmf/common.sh@7 -- # uname -s 00:08:09.906 02:10:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.906 02:10:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.906 02:10:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.906 02:10:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.906 02:10:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.906 02:10:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.906 02:10:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.906 02:10:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.906 02:10:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.906 02:10:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.906 02:10:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:08:09.906 02:10:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:08:09.906 02:10:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.906 02:10:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.906 02:10:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.906 02:10:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.906 02:10:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.906 02:10:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.906 02:10:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.906 02:10:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.906 02:10:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.906 02:10:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.906 02:10:09 -- 
paths/export.sh@5 -- # export PATH 00:08:09.906 02:10:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.906 02:10:09 -- nvmf/common.sh@46 -- # : 0 00:08:09.906 02:10:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:09.906 02:10:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:09.906 02:10:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:09.906 02:10:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.906 02:10:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.906 02:10:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:09.906 02:10:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:09.906 02:10:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:09.906 02:10:09 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:09.906 02:10:09 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:09.906 02:10:09 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:09.906 02:10:09 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:09.906 02:10:09 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:09.906 02:10:09 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:09.906 02:10:09 -- target/referrals.sh@37 -- # nvmftestinit 00:08:09.906 02:10:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:09.906 02:10:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.906 02:10:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:09.906 02:10:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:09.906 02:10:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:09.906 02:10:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.906 02:10:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.906 02:10:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.906 02:10:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:09.906 02:10:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:09.906 02:10:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:09.906 02:10:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:09.906 02:10:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:09.906 02:10:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:09.906 02:10:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.907 02:10:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.907 02:10:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:09.907 02:10:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:09.907 02:10:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.907 02:10:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.907 02:10:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.907 02:10:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.907 02:10:09 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.907 02:10:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.907 02:10:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.907 02:10:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.907 02:10:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:09.907 02:10:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:09.907 Cannot find device "nvmf_tgt_br" 00:08:09.907 02:10:09 -- nvmf/common.sh@154 -- # true 00:08:09.907 02:10:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.907 Cannot find device "nvmf_tgt_br2" 00:08:09.907 02:10:09 -- nvmf/common.sh@155 -- # true 00:08:09.907 02:10:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:09.907 02:10:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:09.907 Cannot find device "nvmf_tgt_br" 00:08:09.907 02:10:09 -- nvmf/common.sh@157 -- # true 00:08:09.907 02:10:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:09.907 Cannot find device "nvmf_tgt_br2" 00:08:09.907 02:10:09 -- nvmf/common.sh@158 -- # true 00:08:09.907 02:10:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:09.907 02:10:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:09.907 02:10:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:09.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.907 02:10:09 -- nvmf/common.sh@161 -- # true 00:08:09.907 02:10:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.907 02:10:09 -- nvmf/common.sh@162 -- # true 00:08:09.907 02:10:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.907 02:10:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.907 02:10:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.907 02:10:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.907 02:10:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.907 02:10:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:10.164 02:10:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:10.164 02:10:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:10.164 02:10:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:10.164 02:10:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:10.164 02:10:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:10.164 02:10:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:10.164 02:10:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:10.164 02:10:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:10.164 02:10:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:10.164 02:10:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:10.164 02:10:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:10.164 02:10:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:10.164 02:10:09 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:10.164 02:10:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:10.164 02:10:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:10.164 02:10:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:10.164 02:10:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:10.164 02:10:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:10.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:08:10.164 00:08:10.164 --- 10.0.0.2 ping statistics --- 00:08:10.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.164 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:10.164 02:10:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:10.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:10.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:08:10.164 00:08:10.164 --- 10.0.0.3 ping statistics --- 00:08:10.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.164 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:10.164 02:10:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:10.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:10.164 00:08:10.164 --- 10.0.0.1 ping statistics --- 00:08:10.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.164 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:10.164 02:10:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.164 02:10:09 -- nvmf/common.sh@421 -- # return 0 00:08:10.164 02:10:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:10.164 02:10:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.164 02:10:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:10.164 02:10:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:10.164 02:10:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.165 02:10:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:10.165 02:10:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:10.165 02:10:09 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:10.165 02:10:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:10.165 02:10:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:10.165 02:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:10.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.165 02:10:09 -- nvmf/common.sh@469 -- # nvmfpid=72958 00:08:10.165 02:10:09 -- nvmf/common.sh@470 -- # waitforlisten 72958 00:08:10.165 02:10:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.165 02:10:09 -- common/autotest_common.sh@819 -- # '[' -z 72958 ']' 00:08:10.165 02:10:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.165 02:10:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:10.165 02:10:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
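The nvmf_veth_init sequence above condenses to the following topology: one veth pair per side, the target half moved into its own namespace, both bridge-facing halves enslaved to nvmf_br, and a firewall exception for the NVMe/TCP port. Names and addresses are the harness's own, taken from the log; the intermediate "ip link set ... up" steps are elided:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT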
00:08:10.165 02:10:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:10.165 02:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:10.165 [2024-07-15 02:10:09.699136] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:10.165 [2024-07-15 02:10:09.699230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.422 [2024-07-15 02:10:09.835807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.422 [2024-07-15 02:10:09.921268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:10.422 [2024-07-15 02:10:09.921441] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.422 [2024-07-15 02:10:09.921456] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.422 [2024-07-15 02:10:09.921468] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.422 [2024-07-15 02:10:09.921554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.422 [2024-07-15 02:10:09.922040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.422 [2024-07-15 02:10:09.922092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.422 [2024-07-15 02:10:09.922097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.354 02:10:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:11.354 02:10:10 -- common/autotest_common.sh@852 -- # return 0 00:08:11.354 02:10:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:11.354 02:10:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:11.354 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 02:10:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.354 02:10:10 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:11.354 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.354 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 [2024-07-15 02:10:10.709460] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.354 02:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.354 02:10:10 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:11.354 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.354 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 [2024-07-15 02:10:10.731393] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:11.354 02:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.354 02:10:10 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:11.354 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.354 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 02:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.354 02:10:10 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:11.354 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.354 02:10:10 -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.354 02:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.354 02:10:10 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:11.354 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.354 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 02:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.354 02:10:10 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.354 02:10:10 -- target/referrals.sh@48 -- # jq length 00:08:11.354 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.354 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 02:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.354 02:10:10 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:11.354 02:10:10 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:11.354 02:10:10 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.354 02:10:10 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.354 02:10:10 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.354 02:10:10 -- target/referrals.sh@21 -- # sort 00:08:11.354 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.354 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.354 02:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.354 02:10:10 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:11.354 02:10:10 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:11.354 02:10:10 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:11.354 02:10:10 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.354 02:10:10 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.354 02:10:10 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.354 02:10:10 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.354 02:10:10 -- target/referrals.sh@26 -- # sort 00:08:11.612 02:10:10 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:11.612 02:10:10 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:11.612 02:10:10 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:11.612 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.612 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.612 02:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.612 02:10:10 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:11.612 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.613 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.613 02:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.613 02:10:10 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:11.613 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.613 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.613 02:10:10 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.613 02:10:10 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.613 02:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.613 02:10:10 -- target/referrals.sh@56 -- # jq length 00:08:11.613 02:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:11.613 02:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.613 02:10:11 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:11.613 02:10:11 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:11.613 02:10:11 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.613 02:10:11 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.613 02:10:11 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.613 02:10:11 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.613 02:10:11 -- target/referrals.sh@26 -- # sort 00:08:11.613 02:10:11 -- target/referrals.sh@26 -- # echo 00:08:11.613 02:10:11 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:11.613 02:10:11 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:11.613 02:10:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.613 02:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.613 02:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.613 02:10:11 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:11.613 02:10:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.613 02:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.613 02:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.613 02:10:11 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:11.613 02:10:11 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.613 02:10:11 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.613 02:10:11 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.613 02:10:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.613 02:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.613 02:10:11 -- target/referrals.sh@21 -- # sort 00:08:11.613 02:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.871 02:10:11 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:11.871 02:10:11 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:11.871 02:10:11 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:11.871 02:10:11 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.871 02:10:11 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.871 02:10:11 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.871 02:10:11 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.871 02:10:11 -- target/referrals.sh@26 -- # sort 00:08:11.871 02:10:11 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:11.871 02:10:11 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:11.871 02:10:11 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:11.871 02:10:11 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:11.871 02:10:11 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:11.871 02:10:11 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.871 02:10:11 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:11.871 02:10:11 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:11.871 02:10:11 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:11.871 02:10:11 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:11.871 02:10:11 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:11.871 02:10:11 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:11.871 02:10:11 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.871 02:10:11 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:11.871 02:10:11 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:11.871 02:10:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.871 02:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.871 02:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.871 02:10:11 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:11.871 02:10:11 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.871 02:10:11 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.871 02:10:11 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.871 02:10:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.871 02:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:11.871 02:10:11 -- target/referrals.sh@21 -- # sort 00:08:11.871 02:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.129 02:10:11 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:12.129 02:10:11 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:12.129 02:10:11 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:12.129 02:10:11 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.129 02:10:11 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.129 02:10:11 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.129 02:10:11 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.129 02:10:11 -- target/referrals.sh@26 -- # sort 00:08:12.129 02:10:11 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:12.129 02:10:11 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:12.129 02:10:11 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme 
subsystem' 00:08:12.129 02:10:11 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:12.129 02:10:11 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:12.129 02:10:11 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.129 02:10:11 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:12.129 02:10:11 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:12.129 02:10:11 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:12.129 02:10:11 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:12.129 02:10:11 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:12.129 02:10:11 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.129 02:10:11 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:12.129 02:10:11 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:12.129 02:10:11 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:12.129 02:10:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.129 02:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:12.129 02:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.129 02:10:11 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:12.129 02:10:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.129 02:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:12.129 02:10:11 -- target/referrals.sh@82 -- # jq length 00:08:12.129 02:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.387 02:10:11 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:12.387 02:10:11 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:12.387 02:10:11 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:12.387 02:10:11 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:12.387 02:10:11 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:12.387 02:10:11 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:12.387 02:10:11 -- target/referrals.sh@26 -- # sort 00:08:12.387 02:10:11 -- target/referrals.sh@26 -- # echo 00:08:12.387 02:10:11 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:12.387 02:10:11 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:12.387 02:10:11 -- target/referrals.sh@86 -- # nvmftestfini 00:08:12.387 02:10:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:12.387 02:10:11 -- nvmf/common.sh@116 -- # sync 00:08:12.387 02:10:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:12.387 02:10:11 -- nvmf/common.sh@119 -- # set +e 00:08:12.387 02:10:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:12.387 02:10:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:12.387 rmmod nvme_tcp 00:08:12.387 rmmod nvme_fabrics 00:08:12.387 rmmod nvme_keyring 
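The referral round trip exercised above (add referrals, read them back both over RPC and via an nvme discover against the 8009 discovery listener, remove them, confirm the list is empty) comes down to four commands per referral. A sketch with one referral; the --hostnqn/--hostid arguments the harness passes to nvme discover are dropped here, and the jq filter is the one from the log:

  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # expect 127.0.0.2
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq length                               # expect 0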
00:08:12.387 02:10:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:12.387 02:10:11 -- nvmf/common.sh@123 -- # set -e 00:08:12.387 02:10:11 -- nvmf/common.sh@124 -- # return 0 00:08:12.387 02:10:11 -- nvmf/common.sh@477 -- # '[' -n 72958 ']' 00:08:12.387 02:10:11 -- nvmf/common.sh@478 -- # killprocess 72958 00:08:12.387 02:10:11 -- common/autotest_common.sh@926 -- # '[' -z 72958 ']' 00:08:12.387 02:10:11 -- common/autotest_common.sh@930 -- # kill -0 72958 00:08:12.387 02:10:11 -- common/autotest_common.sh@931 -- # uname 00:08:12.387 02:10:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:12.387 02:10:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72958 00:08:12.387 killing process with pid 72958 00:08:12.387 02:10:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:12.387 02:10:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:12.387 02:10:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72958' 00:08:12.387 02:10:11 -- common/autotest_common.sh@945 -- # kill 72958 00:08:12.387 02:10:11 -- common/autotest_common.sh@950 -- # wait 72958 00:08:12.644 02:10:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:12.644 02:10:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:12.644 02:10:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:12.644 02:10:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.644 02:10:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:12.644 02:10:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.644 02:10:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.644 02:10:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.644 02:10:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:12.644 ************************************ 00:08:12.644 END TEST nvmf_referrals 00:08:12.644 ************************************ 00:08:12.644 00:08:12.644 real 0m2.975s 00:08:12.644 user 0m9.632s 00:08:12.644 sys 0m0.806s 00:08:12.644 02:10:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.644 02:10:12 -- common/autotest_common.sh@10 -- # set +x 00:08:12.644 02:10:12 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:12.644 02:10:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:12.644 02:10:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.644 02:10:12 -- common/autotest_common.sh@10 -- # set +x 00:08:12.644 ************************************ 00:08:12.644 START TEST nvmf_connect_disconnect 00:08:12.644 ************************************ 00:08:12.644 02:10:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:12.902 * Looking for test storage... 
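nvmftestfini, now seen at the end of both tests, unwinds everything in reverse order. A condensed sketch of the steps visible in the log; the retry loop around modprobe and the error handling are omitted, and "ip netns delete" is an assumption standing in for the harness's _remove_spdk_ns helper:

  sync
  modprobe -v -r nvme-tcp        # the rmmod lines above show this also pulls nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # killprocess on the target pid
  ip netns delete nvmf_tgt_ns_spdk     # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if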
00:08:12.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.902 02:10:12 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.902 02:10:12 -- nvmf/common.sh@7 -- # uname -s 00:08:12.902 02:10:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.902 02:10:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.902 02:10:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.902 02:10:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.902 02:10:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.902 02:10:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.902 02:10:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.902 02:10:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.902 02:10:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.902 02:10:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.902 02:10:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:08:12.902 02:10:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:08:12.902 02:10:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.902 02:10:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.902 02:10:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.902 02:10:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.902 02:10:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.902 02:10:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.902 02:10:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.903 02:10:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.903 02:10:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.903 02:10:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.903 02:10:12 -- 
paths/export.sh@5 -- # export PATH 00:08:12.903 02:10:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.903 02:10:12 -- nvmf/common.sh@46 -- # : 0 00:08:12.903 02:10:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:12.903 02:10:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:12.903 02:10:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:12.903 02:10:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.903 02:10:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.903 02:10:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:12.903 02:10:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:12.903 02:10:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:12.903 02:10:12 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.903 02:10:12 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.903 02:10:12 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:12.903 02:10:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:12.903 02:10:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.903 02:10:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:12.903 02:10:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:12.903 02:10:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:12.903 02:10:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.903 02:10:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.903 02:10:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.903 02:10:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:12.903 02:10:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:12.903 02:10:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:12.903 02:10:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:12.903 02:10:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:12.903 02:10:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:12.903 02:10:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.903 02:10:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.903 02:10:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:12.903 02:10:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:12.903 02:10:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.903 02:10:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.903 02:10:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.903 02:10:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.903 02:10:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.903 02:10:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.903 02:10:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.903 02:10:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.903 02:10:12 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:08:12.903 02:10:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:12.903 Cannot find device "nvmf_tgt_br" 00:08:12.903 02:10:12 -- nvmf/common.sh@154 -- # true 00:08:12.903 02:10:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.903 Cannot find device "nvmf_tgt_br2" 00:08:12.903 02:10:12 -- nvmf/common.sh@155 -- # true 00:08:12.903 02:10:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:12.903 02:10:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:12.903 Cannot find device "nvmf_tgt_br" 00:08:12.903 02:10:12 -- nvmf/common.sh@157 -- # true 00:08:12.903 02:10:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:12.903 Cannot find device "nvmf_tgt_br2" 00:08:12.903 02:10:12 -- nvmf/common.sh@158 -- # true 00:08:12.903 02:10:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:12.903 02:10:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:12.903 02:10:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.903 02:10:12 -- nvmf/common.sh@161 -- # true 00:08:12.903 02:10:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.903 02:10:12 -- nvmf/common.sh@162 -- # true 00:08:12.903 02:10:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.903 02:10:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.903 02:10:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.903 02:10:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.903 02:10:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.162 02:10:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.162 02:10:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.162 02:10:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:13.162 02:10:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:13.162 02:10:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:13.162 02:10:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:13.162 02:10:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:13.162 02:10:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:13.162 02:10:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:13.162 02:10:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:13.162 02:10:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:13.162 02:10:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:13.162 02:10:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:13.162 02:10:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:13.162 02:10:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:13.162 02:10:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:13.162 02:10:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 
-j ACCEPT 00:08:13.162 02:10:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:13.162 02:10:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:13.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:08:13.162 00:08:13.162 --- 10.0.0.2 ping statistics --- 00:08:13.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.162 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:13.162 02:10:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:13.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:13.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:13.162 00:08:13.162 --- 10.0.0.3 ping statistics --- 00:08:13.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.162 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:13.162 02:10:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:13.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:13.162 00:08:13.162 --- 10.0.0.1 ping statistics --- 00:08:13.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.162 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:13.162 02:10:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.162 02:10:12 -- nvmf/common.sh@421 -- # return 0 00:08:13.162 02:10:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:13.162 02:10:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.162 02:10:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:13.162 02:10:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:13.162 02:10:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.162 02:10:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:13.162 02:10:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:13.162 02:10:12 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:13.162 02:10:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:13.162 02:10:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:13.162 02:10:12 -- common/autotest_common.sh@10 -- # set +x 00:08:13.162 02:10:12 -- nvmf/common.sh@469 -- # nvmfpid=73259 00:08:13.162 02:10:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.162 02:10:12 -- nvmf/common.sh@470 -- # waitforlisten 73259 00:08:13.162 02:10:12 -- common/autotest_common.sh@819 -- # '[' -z 73259 ']' 00:08:13.162 02:10:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.162 02:10:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:13.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.162 02:10:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.162 02:10:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:13.162 02:10:12 -- common/autotest_common.sh@10 -- # set +x 00:08:13.162 [2024-07-15 02:10:12.676817] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
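nvmfappstart, invoked just above for the connect/disconnect test, amounts to launching nvmf_tgt inside the target namespace and waiting for its RPC socket to come up. A rough equivalent; the rpc_get_methods poll is only a stand-in for the harness's waitforlisten:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done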
00:08:13.162 [2024-07-15 02:10:12.676889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.421 [2024-07-15 02:10:12.813019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.421 [2024-07-15 02:10:12.907456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:13.421 [2024-07-15 02:10:12.907600] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.421 [2024-07-15 02:10:12.907638] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.421 [2024-07-15 02:10:12.907665] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.421 [2024-07-15 02:10:12.907754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.421 [2024-07-15 02:10:12.908094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.421 [2024-07-15 02:10:12.908565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.421 [2024-07-15 02:10:12.908575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.353 02:10:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:14.353 02:10:13 -- common/autotest_common.sh@852 -- # return 0 00:08:14.353 02:10:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:14.353 02:10:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:14.353 02:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 02:10:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:14.353 02:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.353 02:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 [2024-07-15 02:10:13.701457] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.353 02:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:14.353 02:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.353 02:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 02:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:14.353 02:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.353 02:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 02:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:14.353 02:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.353 02:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 02:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.353 02:10:13 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:14.353 02:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:14.353 [2024-07-15 02:10:13.777141] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.353 02:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:14.353 02:10:13 -- target/connect_disconnect.sh@34 -- # set +x 00:08:16.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [this identical one-line notice repeats once per connect/disconnect iteration from 00:08:21 through 00:11:31; that run is condensed here] 00:11:33.486
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.227 02:13:57 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:58.227 02:13:57 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:58.227 02:13:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:58.227 02:13:57 -- nvmf/common.sh@116 -- # sync 00:11:58.227 02:13:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:58.227 02:13:57 -- nvmf/common.sh@119 -- # set +e 00:11:58.227 02:13:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:58.227 02:13:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:58.227 rmmod nvme_tcp 00:11:58.227 rmmod nvme_fabrics 00:11:58.227 rmmod nvme_keyring 00:11:58.227 02:13:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:58.227 02:13:57 -- nvmf/common.sh@123 -- # set -e 00:11:58.227 02:13:57 -- nvmf/common.sh@124 -- # return 0 00:11:58.227 02:13:57 -- nvmf/common.sh@477 -- # '[' -n 73259 ']' 00:11:58.227 02:13:57 -- nvmf/common.sh@478 -- # killprocess 73259 00:11:58.227 02:13:57 -- common/autotest_common.sh@926 -- # '[' -z 73259 ']' 00:11:58.227 02:13:57 -- common/autotest_common.sh@930 -- # kill -0 73259 00:11:58.227 02:13:57 -- common/autotest_common.sh@931 -- # uname 00:11:58.227 02:13:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:58.227 02:13:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73259 00:11:58.227 killing process with pid 73259 00:11:58.227 02:13:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:58.227 02:13:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:58.227 02:13:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73259' 00:11:58.227 02:13:57 -- common/autotest_common.sh@945 -- # kill 73259 00:11:58.227 02:13:57 -- common/autotest_common.sh@950 -- # wait 73259 00:11:58.227 02:13:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:58.228 02:13:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:58.228 02:13:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:58.228 02:13:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.228 02:13:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:58.228 02:13:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.228 02:13:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.228 02:13:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.228 02:13:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:58.228 00:11:58.228 real 3m45.555s 00:11:58.228 user 14m37.887s 00:11:58.228 sys 0m23.051s 00:11:58.228 02:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.228 
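The hundred "disconnected 1 controller(s)" notices are the whole point of connect_disconnect.sh: the initiator attaches to and detaches from the same subsystem num_iterations=100 times. Reconstructed from the parameters traced above (NVME_CONNECT='nvme connect -i 8', listener 10.0.0.2:4420, NQN nqn.2016-06.io.spdk:cnode1), each iteration is roughly the following; this is a sketch, and the real script also passes the --hostnqn/--hostid pair and waits for the namespace to surface before disconnecting:

for ((i = 1; i <= 100; i++)); do
    # -i 8 requests 8 I/O queues per association (from NVME_CONNECT above).
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # Tearing the association down prints the
    # "NQN:... disconnected 1 controller(s)" line seen throughout.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done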
************************************ 00:11:58.228 END TEST nvmf_connect_disconnect 00:11:58.228 02:13:57 -- common/autotest_common.sh@10 -- # set +x 00:11:58.228 ************************************ 00:11:58.487 02:13:57 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:58.487 02:13:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:58.487 02:13:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.487 02:13:57 -- common/autotest_common.sh@10 -- # set +x 00:11:58.487 ************************************ 00:11:58.487 START TEST nvmf_multitarget 00:11:58.487 ************************************ 00:11:58.487 02:13:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:58.487 * Looking for test storage... 00:11:58.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:58.487 02:13:57 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:58.487 02:13:57 -- nvmf/common.sh@7 -- # uname -s 00:11:58.487 02:13:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.487 02:13:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.487 02:13:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.487 02:13:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.487 02:13:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.487 02:13:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.487 02:13:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.487 02:13:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.487 02:13:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.487 02:13:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.487 02:13:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:11:58.487 02:13:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:11:58.487 02:13:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.487 02:13:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.487 02:13:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:58.487 02:13:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:58.487 02:13:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.487 02:13:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.487 02:13:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.487 02:13:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.487 02:13:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.487 02:13:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.487 02:13:57 -- paths/export.sh@5 -- # export PATH 00:11:58.487 02:13:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.487 02:13:57 -- nvmf/common.sh@46 -- # : 0 00:11:58.487 02:13:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:58.487 02:13:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:58.487 02:13:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:58.487 02:13:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.487 02:13:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.487 02:13:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:58.487 02:13:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:58.487 02:13:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:58.487 02:13:57 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:58.487 02:13:57 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:58.487 02:13:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:58.487 02:13:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.487 02:13:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:58.487 02:13:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:58.487 02:13:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:58.487 02:13:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.487 02:13:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:58.487 02:13:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.487 02:13:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:58.487 02:13:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:58.487 02:13:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:58.487 02:13:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:58.487 02:13:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:58.487 02:13:57 -- 
nvmf/common.sh@420 -- # nvmf_veth_init 00:11:58.487 02:13:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.487 02:13:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.487 02:13:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:58.487 02:13:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:58.487 02:13:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:58.487 02:13:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:58.487 02:13:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:58.487 02:13:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.487 02:13:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:58.487 02:13:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:58.487 02:13:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:58.487 02:13:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:58.487 02:13:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:58.487 02:13:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:58.487 Cannot find device "nvmf_tgt_br" 00:11:58.487 02:13:57 -- nvmf/common.sh@154 -- # true 00:11:58.487 02:13:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:58.487 Cannot find device "nvmf_tgt_br2" 00:11:58.487 02:13:57 -- nvmf/common.sh@155 -- # true 00:11:58.487 02:13:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:58.487 02:13:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:58.487 Cannot find device "nvmf_tgt_br" 00:11:58.487 02:13:57 -- nvmf/common.sh@157 -- # true 00:11:58.487 02:13:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:58.487 Cannot find device "nvmf_tgt_br2" 00:11:58.487 02:13:57 -- nvmf/common.sh@158 -- # true 00:11:58.487 02:13:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:58.487 02:13:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:58.746 02:13:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:58.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:58.746 02:13:58 -- nvmf/common.sh@161 -- # true 00:11:58.746 02:13:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:58.746 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:58.746 02:13:58 -- nvmf/common.sh@162 -- # true 00:11:58.746 02:13:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:58.746 02:13:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:58.746 02:13:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:58.746 02:13:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:58.746 02:13:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:58.746 02:13:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:58.746 02:13:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:58.746 02:13:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:58.746 02:13:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:58.746 02:13:58 -- nvmf/common.sh@182 
-- # ip link set nvmf_init_if up 00:11:58.746 02:13:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:58.746 02:13:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:58.746 02:13:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:58.746 02:13:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:58.746 02:13:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:58.746 02:13:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:58.746 02:13:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:58.746 02:13:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:58.746 02:13:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:58.746 02:13:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:58.746 02:13:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:58.746 02:13:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:58.746 02:13:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:58.746 02:13:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:58.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:11:58.746 00:11:58.746 --- 10.0.0.2 ping statistics --- 00:11:58.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.746 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:11:58.746 02:13:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:58.746 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:58.746 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:11:58.746 00:11:58.746 --- 10.0.0.3 ping statistics --- 00:11:58.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.746 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:58.746 02:13:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:58.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:58.746 00:11:58.746 --- 10.0.0.1 ping statistics --- 00:11:58.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.746 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:58.746 02:13:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.746 02:13:58 -- nvmf/common.sh@421 -- # return 0 00:11:58.746 02:13:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:58.746 02:13:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.746 02:13:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:58.746 02:13:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:58.746 02:13:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.746 02:13:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:58.746 02:13:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:58.746 02:13:58 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:58.746 02:13:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:58.746 02:13:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:58.746 02:13:58 -- common/autotest_common.sh@10 -- # set +x 00:11:58.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
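The step that follows (pid 77033 below) is the app-start pattern every test in this log performs: launch nvmf_tgt inside the namespace, then block until its JSON-RPC socket answers. A simplified sketch of nvmfappstart/waitforlisten; the real helper retries an actual RPC rather than merely testing for the socket, and the 100-attempt bound mirrors max_retries=100 in the trace:

# Launch nvmf_tgt in the test namespace with the flags recorded below.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Minimal stand-in for waitforlisten: poll until the RPC socket exists.
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    [[ -S $rpc_addr ]] && break
    sleep 0.5
done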
00:11:58.746 02:13:58 -- nvmf/common.sh@469 -- # nvmfpid=77033 00:11:58.746 02:13:58 -- nvmf/common.sh@470 -- # waitforlisten 77033 00:11:58.746 02:13:58 -- common/autotest_common.sh@819 -- # '[' -z 77033 ']' 00:11:58.746 02:13:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.746 02:13:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:58.746 02:13:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.746 02:13:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.746 02:13:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:58.746 02:13:58 -- common/autotest_common.sh@10 -- # set +x 00:11:59.004 [2024-07-15 02:13:58.361302] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:59.004 [2024-07-15 02:13:58.361394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.004 [2024-07-15 02:13:58.502492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.263 [2024-07-15 02:13:58.605680] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:59.263 [2024-07-15 02:13:58.606305] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.263 [2024-07-15 02:13:58.606525] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.263 [2024-07-15 02:13:58.606933] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
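The two app_setup_trace notices above are actionable: -e 0xFFFF enabled every tracepoint group, and either command below recovers the events (the first is quoted verbatim from the notice; the /tmp destination in the second is an arbitrary choice):

spdk_trace -s nvmf -i 0          # attach to the running app and snapshot events
cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw trace file for offline analysis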
00:11:59.263 [2024-07-15 02:13:58.607328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.263 [2024-07-15 02:13:58.607490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.263 [2024-07-15 02:13:58.607802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.263 [2024-07-15 02:13:58.607592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.829 02:13:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:59.829 02:13:59 -- common/autotest_common.sh@852 -- # return 0 00:11:59.829 02:13:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:59.829 02:13:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:59.829 02:13:59 -- common/autotest_common.sh@10 -- # set +x 00:12:00.087 02:13:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.087 02:13:59 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:00.087 02:13:59 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:00.087 02:13:59 -- target/multitarget.sh@21 -- # jq length 00:12:00.087 02:13:59 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:00.087 02:13:59 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:00.345 "nvmf_tgt_1" 00:12:00.345 02:13:59 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:00.345 "nvmf_tgt_2" 00:12:00.345 02:13:59 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:00.345 02:13:59 -- target/multitarget.sh@28 -- # jq length 00:12:00.603 02:13:59 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:00.603 02:13:59 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:00.603 true 00:12:00.603 02:14:00 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:00.862 true 00:12:00.862 02:14:00 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:00.862 02:14:00 -- target/multitarget.sh@35 -- # jq length 00:12:00.862 02:14:00 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:00.862 02:14:00 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:00.862 02:14:00 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:00.862 02:14:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:00.862 02:14:00 -- nvmf/common.sh@116 -- # sync 00:12:00.862 02:14:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:00.862 02:14:00 -- nvmf/common.sh@119 -- # set +e 00:12:00.862 02:14:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:00.862 02:14:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:00.862 rmmod nvme_tcp 00:12:00.862 rmmod nvme_fabrics 00:12:00.862 rmmod nvme_keyring 00:12:01.120 02:14:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:01.120 02:14:00 -- nvmf/common.sh@123 -- # set -e 00:12:01.120 02:14:00 -- nvmf/common.sh@124 -- # return 0 00:12:01.120 02:14:00 -- nvmf/common.sh@477 -- # '[' -n 77033 ']' 00:12:01.120 02:14:00 -- nvmf/common.sh@478 -- # killprocess 77033 00:12:01.120 02:14:00 
-- common/autotest_common.sh@926 -- # '[' -z 77033 ']' 00:12:01.120 02:14:00 -- common/autotest_common.sh@930 -- # kill -0 77033 00:12:01.120 02:14:00 -- common/autotest_common.sh@931 -- # uname 00:12:01.120 02:14:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:01.120 02:14:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77033 00:12:01.120 killing process with pid 77033 00:12:01.120 02:14:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:01.120 02:14:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:01.120 02:14:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77033' 00:12:01.120 02:14:00 -- common/autotest_common.sh@945 -- # kill 77033 00:12:01.120 02:14:00 -- common/autotest_common.sh@950 -- # wait 77033 00:12:01.378 02:14:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:01.378 02:14:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:01.378 02:14:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:01.378 02:14:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.378 02:14:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:01.378 02:14:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.378 02:14:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.378 02:14:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.378 02:14:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:01.378 ************************************ 00:12:01.378 END TEST nvmf_multitarget 00:12:01.378 ************************************ 00:12:01.378 00:12:01.378 real 0m2.993s 00:12:01.378 user 0m9.713s 00:12:01.378 sys 0m0.741s 00:12:01.378 02:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.378 02:14:00 -- common/autotest_common.sh@10 -- # set +x 00:12:01.378 02:14:00 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:01.378 02:14:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:01.378 02:14:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:01.378 02:14:00 -- common/autotest_common.sh@10 -- # set +x 00:12:01.378 ************************************ 00:12:01.378 START TEST nvmf_rpc 00:12:01.378 ************************************ 00:12:01.378 02:14:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:01.378 * Looking for test storage... 
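Before the rpc test output resumes, a recap of the nvmf_multitarget pass that just ended: the test drives the proxy script multitarget_rpc.py through a create/verify/delete cycle against the running target, asserting on the target count with jq at each step. Condensed from the traced invocations (-s 32 is copied as-is from the trace):

rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

$rpc_py nvmf_get_targets | jq length        # 1: only the default target exists
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc_py nvmf_get_targets | jq length        # 3: default plus the two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
$rpc_py nvmf_get_targets | jq length        # back to 1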
00:12:01.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:01.378 02:14:00 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:01.378 02:14:00 -- nvmf/common.sh@7 -- # uname -s 00:12:01.636 02:14:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.637 02:14:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.637 02:14:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.637 02:14:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.637 02:14:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.637 02:14:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.637 02:14:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.637 02:14:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.637 02:14:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.637 02:14:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.637 02:14:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:01.637 02:14:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:01.637 02:14:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.637 02:14:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.637 02:14:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:01.637 02:14:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:01.637 02:14:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.637 02:14:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.637 02:14:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.637 02:14:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.637 02:14:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.637 02:14:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.637 02:14:00 -- paths/export.sh@5 
-- # export PATH 00:12:01.637 02:14:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.637 02:14:00 -- nvmf/common.sh@46 -- # : 0 00:12:01.637 02:14:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:01.637 02:14:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:01.637 02:14:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:01.637 02:14:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.637 02:14:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.637 02:14:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:01.637 02:14:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:01.637 02:14:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:01.637 02:14:00 -- target/rpc.sh@11 -- # loops=5 00:12:01.637 02:14:00 -- target/rpc.sh@23 -- # nvmftestinit 00:12:01.637 02:14:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:01.637 02:14:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.637 02:14:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:01.637 02:14:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:01.637 02:14:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:01.637 02:14:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.637 02:14:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.637 02:14:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.637 02:14:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:01.637 02:14:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:01.637 02:14:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:01.637 02:14:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:01.637 02:14:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:01.637 02:14:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:01.637 02:14:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.637 02:14:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.637 02:14:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:01.637 02:14:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:01.637 02:14:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:01.637 02:14:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:01.637 02:14:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:01.637 02:14:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.637 02:14:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:01.637 02:14:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:01.637 02:14:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:01.637 02:14:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:01.637 02:14:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:01.637 02:14:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:01.637 Cannot find device 
"nvmf_tgt_br" 00:12:01.637 02:14:00 -- nvmf/common.sh@154 -- # true 00:12:01.637 02:14:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:01.637 Cannot find device "nvmf_tgt_br2" 00:12:01.637 02:14:01 -- nvmf/common.sh@155 -- # true 00:12:01.637 02:14:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:01.637 02:14:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:01.637 Cannot find device "nvmf_tgt_br" 00:12:01.637 02:14:01 -- nvmf/common.sh@157 -- # true 00:12:01.637 02:14:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:01.637 Cannot find device "nvmf_tgt_br2" 00:12:01.637 02:14:01 -- nvmf/common.sh@158 -- # true 00:12:01.637 02:14:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:01.637 02:14:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:01.637 02:14:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:01.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:01.637 02:14:01 -- nvmf/common.sh@161 -- # true 00:12:01.637 02:14:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:01.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:01.637 02:14:01 -- nvmf/common.sh@162 -- # true 00:12:01.637 02:14:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:01.637 02:14:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:01.637 02:14:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:01.637 02:14:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:01.637 02:14:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:01.637 02:14:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:01.637 02:14:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:01.637 02:14:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:01.637 02:14:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:01.637 02:14:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:01.637 02:14:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:01.637 02:14:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:01.895 02:14:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:01.895 02:14:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:01.895 02:14:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:01.895 02:14:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:01.895 02:14:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:01.895 02:14:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:01.895 02:14:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:01.895 02:14:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:01.895 02:14:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:01.895 02:14:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:01.895 02:14:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:01.896 02:14:01 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:01.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:12:01.896 00:12:01.896 --- 10.0.0.2 ping statistics --- 00:12:01.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.896 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:12:01.896 02:14:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:01.896 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:01.896 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:12:01.896 00:12:01.896 --- 10.0.0.3 ping statistics --- 00:12:01.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.896 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:01.896 02:14:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:01.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:12:01.896 00:12:01.896 --- 10.0.0.1 ping statistics --- 00:12:01.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.896 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:01.896 02:14:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.896 02:14:01 -- nvmf/common.sh@421 -- # return 0 00:12:01.896 02:14:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:01.896 02:14:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.896 02:14:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:01.896 02:14:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:01.896 02:14:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.896 02:14:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:01.896 02:14:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:01.896 02:14:01 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:01.896 02:14:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:01.896 02:14:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:01.896 02:14:01 -- common/autotest_common.sh@10 -- # set +x 00:12:01.896 02:14:01 -- nvmf/common.sh@469 -- # nvmfpid=77267 00:12:01.896 02:14:01 -- nvmf/common.sh@470 -- # waitforlisten 77267 00:12:01.896 02:14:01 -- common/autotest_common.sh@819 -- # '[' -z 77267 ']' 00:12:01.896 02:14:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.896 02:14:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.896 02:14:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:01.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.896 02:14:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.896 02:14:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:01.896 02:14:01 -- common/autotest_common.sh@10 -- # set +x 00:12:01.896 [2024-07-15 02:14:01.379261] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:01.896 [2024-07-15 02:14:01.379353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.154 [2024-07-15 02:14:01.515821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.154 [2024-07-15 02:14:01.612194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:02.154 [2024-07-15 02:14:01.612553] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.154 [2024-07-15 02:14:01.612602] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.154 [2024-07-15 02:14:01.612754] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.154 [2024-07-15 02:14:01.612879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.154 [2024-07-15 02:14:01.613000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.154 [2024-07-15 02:14:01.613726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.154 [2024-07-15 02:14:01.613735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.088 02:14:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:03.088 02:14:02 -- common/autotest_common.sh@852 -- # return 0 00:12:03.088 02:14:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:03.088 02:14:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:03.088 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.088 02:14:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.088 02:14:02 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:03.088 02:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.088 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.088 02:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.088 02:14:02 -- target/rpc.sh@26 -- # stats='{ 00:12:03.088 "poll_groups": [ 00:12:03.088 { 00:12:03.088 "admin_qpairs": 0, 00:12:03.088 "completed_nvme_io": 0, 00:12:03.088 "current_admin_qpairs": 0, 00:12:03.088 "current_io_qpairs": 0, 00:12:03.088 "io_qpairs": 0, 00:12:03.088 "name": "nvmf_tgt_poll_group_0", 00:12:03.088 "pending_bdev_io": 0, 00:12:03.088 "transports": [] 00:12:03.088 }, 00:12:03.088 { 00:12:03.088 "admin_qpairs": 0, 00:12:03.088 "completed_nvme_io": 0, 00:12:03.088 "current_admin_qpairs": 0, 00:12:03.088 "current_io_qpairs": 0, 00:12:03.088 "io_qpairs": 0, 00:12:03.088 "name": "nvmf_tgt_poll_group_1", 00:12:03.088 "pending_bdev_io": 0, 00:12:03.088 "transports": [] 00:12:03.088 }, 00:12:03.088 { 00:12:03.088 "admin_qpairs": 0, 00:12:03.088 "completed_nvme_io": 0, 00:12:03.088 "current_admin_qpairs": 0, 00:12:03.088 "current_io_qpairs": 0, 00:12:03.088 "io_qpairs": 0, 00:12:03.088 "name": "nvmf_tgt_poll_group_2", 00:12:03.088 "pending_bdev_io": 0, 00:12:03.088 "transports": [] 00:12:03.088 }, 00:12:03.088 { 00:12:03.088 "admin_qpairs": 0, 00:12:03.088 "completed_nvme_io": 0, 00:12:03.088 "current_admin_qpairs": 0, 00:12:03.088 "current_io_qpairs": 0, 00:12:03.088 "io_qpairs": 0, 00:12:03.088 "name": "nvmf_tgt_poll_group_3", 00:12:03.088 "pending_bdev_io": 0, 00:12:03.088 "transports": [] 00:12:03.088 } 00:12:03.088 ], 00:12:03.088 "tick_rate": 2200000000 00:12:03.088 }' 00:12:03.088 
02:14:02 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:03.088 02:14:02 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:03.088 02:14:02 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:03.088 02:14:02 -- target/rpc.sh@15 -- # wc -l 00:12:03.088 02:14:02 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:03.088 02:14:02 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:03.088 02:14:02 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:03.088 02:14:02 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.088 02:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.088 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.088 [2024-07-15 02:14:02.533457] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.088 02:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.088 02:14:02 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:03.088 02:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.088 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.088 02:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.088 02:14:02 -- target/rpc.sh@33 -- # stats='{ 00:12:03.088 "poll_groups": [ 00:12:03.088 { 00:12:03.088 "admin_qpairs": 0, 00:12:03.088 "completed_nvme_io": 0, 00:12:03.088 "current_admin_qpairs": 0, 00:12:03.088 "current_io_qpairs": 0, 00:12:03.088 "io_qpairs": 0, 00:12:03.088 "name": "nvmf_tgt_poll_group_0", 00:12:03.088 "pending_bdev_io": 0, 00:12:03.088 "transports": [ 00:12:03.088 { 00:12:03.088 "trtype": "TCP" 00:12:03.088 } 00:12:03.088 ] 00:12:03.088 }, 00:12:03.088 { 00:12:03.088 "admin_qpairs": 0, 00:12:03.088 "completed_nvme_io": 0, 00:12:03.088 "current_admin_qpairs": 0, 00:12:03.088 "current_io_qpairs": 0, 00:12:03.088 "io_qpairs": 0, 00:12:03.088 "name": "nvmf_tgt_poll_group_1", 00:12:03.088 "pending_bdev_io": 0, 00:12:03.088 "transports": [ 00:12:03.088 { 00:12:03.088 "trtype": "TCP" 00:12:03.088 } 00:12:03.088 ] 00:12:03.088 }, 00:12:03.088 { 00:12:03.088 "admin_qpairs": 0, 00:12:03.088 "completed_nvme_io": 0, 00:12:03.088 "current_admin_qpairs": 0, 00:12:03.088 "current_io_qpairs": 0, 00:12:03.088 "io_qpairs": 0, 00:12:03.088 "name": "nvmf_tgt_poll_group_2", 00:12:03.088 "pending_bdev_io": 0, 00:12:03.088 "transports": [ 00:12:03.088 { 00:12:03.088 "trtype": "TCP" 00:12:03.088 } 00:12:03.088 ] 00:12:03.088 }, 00:12:03.088 { 00:12:03.088 "admin_qpairs": 0, 00:12:03.088 "completed_nvme_io": 0, 00:12:03.088 "current_admin_qpairs": 0, 00:12:03.088 "current_io_qpairs": 0, 00:12:03.088 "io_qpairs": 0, 00:12:03.088 "name": "nvmf_tgt_poll_group_3", 00:12:03.088 "pending_bdev_io": 0, 00:12:03.088 "transports": [ 00:12:03.088 { 00:12:03.088 "trtype": "TCP" 00:12:03.088 } 00:12:03.088 ] 00:12:03.088 } 00:12:03.088 ], 00:12:03.088 "tick_rate": 2200000000 00:12:03.088 }' 00:12:03.088 02:14:02 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:03.088 02:14:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:03.088 02:14:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:03.088 02:14:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:03.088 02:14:02 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:03.088 02:14:02 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:03.088 02:14:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:03.088 02:14:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:03.088 02:14:02 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:03.346 02:14:02 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:03.346 02:14:02 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:03.346 02:14:02 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:03.346 02:14:02 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:03.346 02:14:02 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:03.346 02:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.346 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.346 Malloc1 00:12:03.346 02:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.346 02:14:02 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:03.346 02:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.346 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.346 02:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.346 02:14:02 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.346 02:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.346 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.346 02:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.346 02:14:02 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:03.346 02:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.346 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.346 02:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.346 02:14:02 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.346 02:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.346 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.346 [2024-07-15 02:14:02.757538] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.346 02:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.346 02:14:02 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 -a 10.0.0.2 -s 4420 00:12:03.346 02:14:02 -- common/autotest_common.sh@640 -- # local es=0 00:12:03.346 02:14:02 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 -a 10.0.0.2 -s 4420 00:12:03.346 02:14:02 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:03.346 02:14:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.346 02:14:02 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:03.346 02:14:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.346 02:14:02 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:03.346 02:14:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.346 02:14:02 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:03.346 02:14:02 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:03.346 02:14:02 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 -a 10.0.0.2 -s 4420 00:12:03.346 [2024-07-15 02:14:02.782014] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1' 00:12:03.346 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:03.346 could not add new controller: failed to write to nvme-fabrics device 00:12:03.346 02:14:02 -- common/autotest_common.sh@643 -- # es=1 00:12:03.347 02:14:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:03.347 02:14:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:03.347 02:14:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:03.347 02:14:02 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:03.347 02:14:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.347 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:12:03.347 02:14:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:03.347 02:14:02 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.604 02:14:02 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.604 02:14:02 -- common/autotest_common.sh@1177 -- # local i=0 00:12:03.604 02:14:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.604 02:14:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:03.604 02:14:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:05.500 02:14:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:05.500 02:14:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:05.501 02:14:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.501 02:14:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:05.501 02:14:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.501 02:14:04 -- common/autotest_common.sh@1187 -- # return 0 00:12:05.501 02:14:04 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.501 02:14:05 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.501 02:14:05 -- common/autotest_common.sh@1198 -- # local i=0 00:12:05.501 02:14:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:05.501 02:14:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.501 02:14:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.501 02:14:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:05.811 02:14:05 -- common/autotest_common.sh@1210 -- # return 0 00:12:05.811 02:14:05 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:05.811 02:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:05.811 02:14:05 -- common/autotest_common.sh@10 
-- # set +x 00:12:05.811 02:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:05.811 02:14:05 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.811 02:14:05 -- common/autotest_common.sh@640 -- # local es=0 00:12:05.811 02:14:05 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.811 02:14:05 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:05.811 02:14:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:05.811 02:14:05 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:05.811 02:14:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:05.811 02:14:05 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:05.811 02:14:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:05.811 02:14:05 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:05.811 02:14:05 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:05.811 02:14:05 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.811 [2024-07-15 02:14:05.093909] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1' 00:12:05.811 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:05.811 could not add new controller: failed to write to nvme-fabrics device 00:12:05.811 02:14:05 -- common/autotest_common.sh@643 -- # es=1 00:12:05.811 02:14:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:05.811 02:14:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:05.811 02:14:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:05.811 02:14:05 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:05.811 02:14:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:05.811 02:14:05 -- common/autotest_common.sh@10 -- # set +x 00:12:05.811 02:14:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:05.811 02:14:05 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.811 02:14:05 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:05.811 02:14:05 -- common/autotest_common.sh@1177 -- # local i=0 00:12:05.811 02:14:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.811 02:14:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:05.811 02:14:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:08.371 02:14:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:08.371 02:14:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:08.371 02:14:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.371 02:14:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:08.371 02:14:07 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.371 02:14:07 -- common/autotest_common.sh@1187 -- # return 0 00:12:08.371 02:14:07 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.371 02:14:07 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:08.371 02:14:07 -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.371 02:14:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:08.371 02:14:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.371 02:14:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:08.371 02:14:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:08.371 02:14:07 -- common/autotest_common.sh@1210 -- # return 0 00:12:08.371 02:14:07 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.371 02:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.371 02:14:07 -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 02:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.371 02:14:07 -- target/rpc.sh@81 -- # seq 1 5 00:12:08.371 02:14:07 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:08.371 02:14:07 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:08.371 02:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.371 02:14:07 -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 02:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.371 02:14:07 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.371 02:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.371 02:14:07 -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 [2024-07-15 02:14:07.395390] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.371 02:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.371 02:14:07 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:08.371 02:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.371 02:14:07 -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 02:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.371 02:14:07 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:08.371 02:14:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.371 02:14:07 -- common/autotest_common.sh@10 -- # set +x 00:12:08.371 02:14:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.371 02:14:07 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:08.371 02:14:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.371 02:14:07 -- common/autotest_common.sh@1177 -- # local i=0 00:12:08.371 02:14:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.371 02:14:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:08.371 02:14:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:10.277 02:14:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
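
Note on the two rejected connects above: the NOT wrapper (autotest_common.sh@640-667 in this trace) runs a command that is expected to fail and inverts its exit status, so the rejection becomes a passing check. A minimal sketch of the idea; the real helper also validates the executable via type -t / type -P and screens out signal exits (es > 128), both elided here:

    NOT() {
        local es=0
        "$@" || es=$?
        # es=1 is what the rejected 'nvme connect' produced above; any
        # nonzero status makes the negative test pass.
        (( es != 0 ))
    }
    # Usage as in rpc.sh@58: the connect must fail while the host NQN is
    # absent from cnode1's allowed-host list.
    NOT nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
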
00:12:10.277 02:14:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:10.277 02:14:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.277 02:14:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:10.277 02:14:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.277 02:14:09 -- common/autotest_common.sh@1187 -- # return 0 00:12:10.277 02:14:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.277 02:14:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.277 02:14:09 -- common/autotest_common.sh@1198 -- # local i=0 00:12:10.277 02:14:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:10.277 02:14:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.277 02:14:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:10.277 02:14:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.277 02:14:09 -- common/autotest_common.sh@1210 -- # return 0 00:12:10.277 02:14:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:10.277 02:14:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.277 02:14:09 -- common/autotest_common.sh@10 -- # set +x 00:12:10.277 02:14:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.277 02:14:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.277 02:14:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.277 02:14:09 -- common/autotest_common.sh@10 -- # set +x 00:12:10.277 02:14:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.277 02:14:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:10.277 02:14:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.277 02:14:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.277 02:14:09 -- common/autotest_common.sh@10 -- # set +x 00:12:10.277 02:14:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.277 02:14:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.277 02:14:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.277 02:14:09 -- common/autotest_common.sh@10 -- # set +x 00:12:10.277 [2024-07-15 02:14:09.808404] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.277 02:14:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.277 02:14:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:10.277 02:14:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.277 02:14:09 -- common/autotest_common.sh@10 -- # set +x 00:12:10.277 02:14:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.277 02:14:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.277 02:14:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.277 02:14:09 -- common/autotest_common.sh@10 -- # set +x 00:12:10.277 02:14:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.277 02:14:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 
--hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.537 02:14:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.537 02:14:09 -- common/autotest_common.sh@1177 -- # local i=0 00:12:10.537 02:14:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.537 02:14:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:10.537 02:14:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:13.065 02:14:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:13.065 02:14:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:13.065 02:14:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.065 02:14:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:13.065 02:14:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.065 02:14:12 -- common/autotest_common.sh@1187 -- # return 0 00:12:13.065 02:14:12 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.065 02:14:12 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.065 02:14:12 -- common/autotest_common.sh@1198 -- # local i=0 00:12:13.065 02:14:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:13.065 02:14:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.065 02:14:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:13.065 02:14:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.065 02:14:12 -- common/autotest_common.sh@1210 -- # return 0 00:12:13.065 02:14:12 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:13.065 02:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.065 02:14:12 -- common/autotest_common.sh@10 -- # set +x 00:12:13.065 02:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.065 02:14:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.065 02:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.065 02:14:12 -- common/autotest_common.sh@10 -- # set +x 00:12:13.065 02:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.065 02:14:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:13.065 02:14:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.065 02:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.065 02:14:12 -- common/autotest_common.sh@10 -- # set +x 00:12:13.065 02:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.065 02:14:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.065 02:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.065 02:14:12 -- common/autotest_common.sh@10 -- # set +x 00:12:13.065 [2024-07-15 02:14:12.120700] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.065 02:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.065 02:14:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:13.065 02:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.065 02:14:12 -- common/autotest_common.sh@10 -- # set 
+x 00:12:13.065 02:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.065 02:14:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.065 02:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.065 02:14:12 -- common/autotest_common.sh@10 -- # set +x 00:12:13.065 02:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.065 02:14:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.065 02:14:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.065 02:14:12 -- common/autotest_common.sh@1177 -- # local i=0 00:12:13.065 02:14:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.065 02:14:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:13.065 02:14:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:14.964 02:14:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:14.964 02:14:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:14.964 02:14:14 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.964 02:14:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:14.964 02:14:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.964 02:14:14 -- common/autotest_common.sh@1187 -- # return 0 00:12:14.964 02:14:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.964 02:14:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.964 02:14:14 -- common/autotest_common.sh@1198 -- # local i=0 00:12:14.964 02:14:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:14.964 02:14:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.964 02:14:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:14.964 02:14:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.964 02:14:14 -- common/autotest_common.sh@1210 -- # return 0 00:12:14.964 02:14:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.964 02:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.964 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:12:14.964 02:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.964 02:14:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.964 02:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.964 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:12:14.964 02:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.964 02:14:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.964 02:14:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.964 02:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.964 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:12:14.964 02:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.964 02:14:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.964 02:14:14 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:14.964 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:12:14.964 [2024-07-15 02:14:14.433583] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.964 02:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.964 02:14:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.964 02:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.964 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:12:14.964 02:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.964 02:14:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.964 02:14:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:14.964 02:14:14 -- common/autotest_common.sh@10 -- # set +x 00:12:14.964 02:14:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:14.964 02:14:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.223 02:14:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.223 02:14:14 -- common/autotest_common.sh@1177 -- # local i=0 00:12:15.223 02:14:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.223 02:14:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:15.223 02:14:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:17.129 02:14:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:17.129 02:14:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:17.129 02:14:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.129 02:14:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:17.129 02:14:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.129 02:14:16 -- common/autotest_common.sh@1187 -- # return 0 00:12:17.129 02:14:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.387 02:14:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.387 02:14:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:17.387 02:14:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:17.387 02:14:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.387 02:14:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:17.387 02:14:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.387 02:14:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:17.387 02:14:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.387 02:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.387 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:12:17.387 02:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.387 02:14:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.387 02:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.387 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:12:17.387 02:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.387 02:14:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
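
Each of the five passes of this loop (rpc.sh@81-94, seq 1 5) performs the same create/connect/teardown cycle traced above and below. Flattened into plain commands it is roughly the following, where rpc.py stands in for the rpc_cmd wrapper talking to the target's RPC socket:

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # nsid 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    waitforserial SPDKISFASTANDAWESOME      # poll lsblk until the namespace shows up
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
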
00:12:17.387 02:14:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.387 02:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.387 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:12:17.387 02:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.387 02:14:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.387 02:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.387 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:12:17.387 [2024-07-15 02:14:16.758391] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.387 02:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.387 02:14:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.387 02:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.387 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:12:17.387 02:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.387 02:14:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.387 02:14:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:17.387 02:14:16 -- common/autotest_common.sh@10 -- # set +x 00:12:17.387 02:14:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:17.387 02:14:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.646 02:14:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.646 02:14:16 -- common/autotest_common.sh@1177 -- # local i=0 00:12:17.646 02:14:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.646 02:14:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:17.646 02:14:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:19.548 02:14:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:19.548 02:14:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:19.548 02:14:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.548 02:14:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:19.548 02:14:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.548 02:14:18 -- common/autotest_common.sh@1187 -- # return 0 00:12:19.548 02:14:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.548 02:14:19 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.548 02:14:19 -- common/autotest_common.sh@1198 -- # local i=0 00:12:19.548 02:14:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:19.548 02:14:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.548 02:14:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:19.548 02:14:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.548 02:14:19 -- common/autotest_common.sh@1210 -- # return 0 00:12:19.548 02:14:19 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:19.548 02:14:19 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:19.548 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.548 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.548 02:14:19 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.548 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.548 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.548 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.548 02:14:19 -- target/rpc.sh@99 -- # seq 1 5 00:12:19.548 02:14:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.548 02:14:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.548 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.548 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.548 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.548 02:14:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.548 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.548 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.548 [2024-07-15 02:14:19.067319] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.548 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.548 02:14:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.548 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.548 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.548 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.548 02:14:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.548 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.548 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.548 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.548 02:14:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.548 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.548 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.548 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.548 02:14:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.548 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.548 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.807 02:14:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 [2024-07-15 02:14:19.115400] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.807 02:14:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 [2024-07-15 02:14:19.163481] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.807 02:14:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 [2024-07-15 02:14:19.215636] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.807 02:14:19 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.807 02:14:19 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.807 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.807 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.807 [2024-07-15 02:14:19.263681] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.807 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.808 02:14:19 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.808 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.808 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.808 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.808 02:14:19 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.808 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.808 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.808 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.808 02:14:19 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.808 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.808 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.808 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.808 02:14:19 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.808 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.808 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.808 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.808 02:14:19 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:19.808 02:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.808 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:19.808 02:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.808 02:14:19 -- target/rpc.sh@110 -- # stats='{ 00:12:19.808 "poll_groups": [ 00:12:19.808 { 00:12:19.808 "admin_qpairs": 2, 00:12:19.808 "completed_nvme_io": 66, 00:12:19.808 "current_admin_qpairs": 0, 00:12:19.808 "current_io_qpairs": 0, 00:12:19.808 "io_qpairs": 16, 00:12:19.808 "name": "nvmf_tgt_poll_group_0", 00:12:19.808 "pending_bdev_io": 0, 00:12:19.808 "transports": [ 00:12:19.808 { 00:12:19.808 "trtype": "TCP" 00:12:19.808 } 00:12:19.808 ] 00:12:19.808 }, 00:12:19.808 { 00:12:19.808 "admin_qpairs": 3, 00:12:19.808 "completed_nvme_io": 116, 00:12:19.808 "current_admin_qpairs": 0, 00:12:19.808 "current_io_qpairs": 0, 00:12:19.808 "io_qpairs": 17, 00:12:19.808 "name": "nvmf_tgt_poll_group_1", 00:12:19.808 "pending_bdev_io": 0, 00:12:19.808 "transports": [ 00:12:19.808 { 00:12:19.808 "trtype": "TCP" 00:12:19.808 } 00:12:19.808 ] 00:12:19.808 }, 00:12:19.808 { 00:12:19.808 "admin_qpairs": 1, 00:12:19.808 "completed_nvme_io": 169, 00:12:19.808 "current_admin_qpairs": 0, 00:12:19.808 "current_io_qpairs": 0, 00:12:19.808 "io_qpairs": 19, 00:12:19.808 "name": "nvmf_tgt_poll_group_2", 00:12:19.808 "pending_bdev_io": 0, 00:12:19.808 "transports": [ 00:12:19.808 { 00:12:19.808 "trtype": "TCP" 00:12:19.808 } 00:12:19.808 ] 00:12:19.808 }, 00:12:19.808 { 00:12:19.808 "admin_qpairs": 1, 00:12:19.808 "completed_nvme_io": 69, 00:12:19.808 "current_admin_qpairs": 0, 00:12:19.808 "current_io_qpairs": 0, 00:12:19.808 "io_qpairs": 18, 00:12:19.808 "name": "nvmf_tgt_poll_group_3", 00:12:19.808 "pending_bdev_io": 0, 00:12:19.808 "transports": [ 00:12:19.808 { 00:12:19.808 "trtype": "TCP" 00:12:19.808 } 00:12:19.808 ] 00:12:19.808 } 00:12:19.808 ], 00:12:19.808 "tick_rate": 2200000000 00:12:19.808 }' 00:12:19.808 02:14:19 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:19.808 02:14:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:19.808 02:14:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:19.808 02:14:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:20.066 02:14:19 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:20.066 02:14:19 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:20.066 02:14:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:20.066 02:14:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 
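
The jsum helper traced here pipes the captured stats JSON through jq and sums the values with awk. Across the four poll groups it finds 2+3+1+1 = 7 admin qpairs (asserted just above) and 16+17+19+18 = 70 I/O qpairs (asserted just below). For reference, the same aggregation can be done in a single jq expression without the awk stage:

    echo "$stats" | jq '[.poll_groups[].admin_qpairs] | add'   # -> 7
    echo "$stats" | jq '[.poll_groups[].io_qpairs]  | add'     # -> 70
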
00:12:20.066 02:14:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:20.066 02:14:19 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:20.066 02:14:19 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:20.066 02:14:19 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:20.066 02:14:19 -- target/rpc.sh@123 -- # nvmftestfini 00:12:20.066 02:14:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:20.066 02:14:19 -- nvmf/common.sh@116 -- # sync 00:12:20.066 02:14:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:20.066 02:14:19 -- nvmf/common.sh@119 -- # set +e 00:12:20.066 02:14:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:20.066 02:14:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:20.066 rmmod nvme_tcp 00:12:20.066 rmmod nvme_fabrics 00:12:20.066 rmmod nvme_keyring 00:12:20.066 02:14:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:20.066 02:14:19 -- nvmf/common.sh@123 -- # set -e 00:12:20.066 02:14:19 -- nvmf/common.sh@124 -- # return 0 00:12:20.067 02:14:19 -- nvmf/common.sh@477 -- # '[' -n 77267 ']' 00:12:20.067 02:14:19 -- nvmf/common.sh@478 -- # killprocess 77267 00:12:20.067 02:14:19 -- common/autotest_common.sh@926 -- # '[' -z 77267 ']' 00:12:20.067 02:14:19 -- common/autotest_common.sh@930 -- # kill -0 77267 00:12:20.067 02:14:19 -- common/autotest_common.sh@931 -- # uname 00:12:20.067 02:14:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:20.067 02:14:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77267 00:12:20.067 killing process with pid 77267 00:12:20.067 02:14:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:20.067 02:14:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:20.067 02:14:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77267' 00:12:20.067 02:14:19 -- common/autotest_common.sh@945 -- # kill 77267 00:12:20.067 02:14:19 -- common/autotest_common.sh@950 -- # wait 77267 00:12:20.325 02:14:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:20.325 02:14:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:20.325 02:14:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:20.325 02:14:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.325 02:14:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:20.325 02:14:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.325 02:14:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.325 02:14:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.325 02:14:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:20.325 00:12:20.325 real 0m18.966s 00:12:20.325 user 1m11.828s 00:12:20.325 sys 0m2.175s 00:12:20.325 02:14:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.325 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:20.325 ************************************ 00:12:20.325 END TEST nvmf_rpc 00:12:20.325 ************************************ 00:12:20.325 02:14:19 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:20.325 02:14:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:20.325 02:14:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:20.325 02:14:19 -- common/autotest_common.sh@10 -- # set +x 00:12:20.325 ************************************ 00:12:20.325 START TEST nvmf_invalid 00:12:20.325 ************************************ 00:12:20.325 
02:14:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:20.598 * Looking for test storage... 00:12:20.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:20.598 02:14:19 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:20.598 02:14:19 -- nvmf/common.sh@7 -- # uname -s 00:12:20.598 02:14:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.598 02:14:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.598 02:14:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.598 02:14:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.598 02:14:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.598 02:14:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.598 02:14:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.598 02:14:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.598 02:14:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.598 02:14:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.598 02:14:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:20.598 02:14:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:20.598 02:14:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.598 02:14:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.598 02:14:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:20.598 02:14:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:20.598 02:14:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.598 02:14:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.598 02:14:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.599 02:14:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.599 02:14:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.599 02:14:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.599 02:14:19 -- paths/export.sh@5 -- # export PATH 00:12:20.599 02:14:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.599 02:14:19 -- nvmf/common.sh@46 -- # : 0 00:12:20.599 02:14:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:20.599 02:14:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:20.599 02:14:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:20.599 02:14:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.599 02:14:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.599 02:14:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:20.599 02:14:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:20.599 02:14:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:20.599 02:14:19 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:20.599 02:14:19 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:20.599 02:14:19 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:20.599 02:14:19 -- target/invalid.sh@14 -- # target=foobar 00:12:20.599 02:14:19 -- target/invalid.sh@16 -- # RANDOM=0 00:12:20.599 02:14:19 -- target/invalid.sh@34 -- # nvmftestinit 00:12:20.599 02:14:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:20.599 02:14:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.599 02:14:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:20.599 02:14:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:20.599 02:14:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:20.599 02:14:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.599 02:14:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.599 02:14:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.599 02:14:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:20.599 02:14:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:20.599 02:14:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:20.599 02:14:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:20.599 02:14:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:20.599 02:14:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:20.599 02:14:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.599 02:14:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.599 02:14:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
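
For orientation, the NVMF_APP pieces assembled in this stretch combine with the namespace prefix and the core mask into the single nvmf_tgt invocation logged further down (nvmf/common.sh@208 and @468). A reconstruction of how the command line is built; the initial binary-path assignment and the backgrounding are assumptions, the rest matches the trace:

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assumed initial value
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                  # shm id 0, all trace groups
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")       # run inside the target netns
    "${NVMF_APP[@]}" -m 0xF &                                    # 4-core mask from nvmfappstart
    nvmfpid=$!                                                   # 77778 in this run
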
00:12:20.599 02:14:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:20.599 02:14:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:20.599 02:14:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:20.599 02:14:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:20.599 02:14:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.599 02:14:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:20.599 02:14:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:20.599 02:14:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:20.599 02:14:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:20.599 02:14:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:20.599 02:14:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:20.599 Cannot find device "nvmf_tgt_br" 00:12:20.599 02:14:20 -- nvmf/common.sh@154 -- # true 00:12:20.599 02:14:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:20.599 Cannot find device "nvmf_tgt_br2" 00:12:20.599 02:14:20 -- nvmf/common.sh@155 -- # true 00:12:20.599 02:14:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:20.599 02:14:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:20.599 Cannot find device "nvmf_tgt_br" 00:12:20.599 02:14:20 -- nvmf/common.sh@157 -- # true 00:12:20.599 02:14:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:20.599 Cannot find device "nvmf_tgt_br2" 00:12:20.599 02:14:20 -- nvmf/common.sh@158 -- # true 00:12:20.599 02:14:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:20.599 02:14:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:20.599 02:14:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:20.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.599 02:14:20 -- nvmf/common.sh@161 -- # true 00:12:20.599 02:14:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:20.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:20.599 02:14:20 -- nvmf/common.sh@162 -- # true 00:12:20.599 02:14:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:20.599 02:14:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:20.599 02:14:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:20.599 02:14:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:20.599 02:14:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:20.871 02:14:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:20.871 02:14:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:20.871 02:14:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:20.871 02:14:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:20.871 02:14:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:20.871 02:14:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:20.871 02:14:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:20.871 02:14:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
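
Condensed, the virtual topology nvmf_veth_init is building here is two veth pairs joined by a bridge, with the target half moved into the nvmf_tgt_ns_spdk namespace, giving 10.0.0.1 (host initiator) to 10.0.0.2 (target netns). A sketch distilled from the ip commands in this trace, with the second target interface, the up commands, and the iptables rules omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br   # bridge joins the two halves
    ip link set nvmf_tgt_br  master nvmf_br
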
00:12:20.871 02:14:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:20.871 02:14:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:20.871 02:14:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:20.871 02:14:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:20.871 02:14:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:20.871 02:14:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:20.871 02:14:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:20.871 02:14:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:20.871 02:14:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:20.871 02:14:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:20.871 02:14:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:20.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:12:20.871 00:12:20.871 --- 10.0.0.2 ping statistics --- 00:12:20.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.871 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:20.871 02:14:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:20.871 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:20.871 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:12:20.871 00:12:20.871 --- 10.0.0.3 ping statistics --- 00:12:20.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.871 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:20.871 02:14:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:20.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:20.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:20.871 00:12:20.871 --- 10.0.0.1 ping statistics --- 00:12:20.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.871 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:20.871 02:14:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.871 02:14:20 -- nvmf/common.sh@421 -- # return 0 00:12:20.871 02:14:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:20.871 02:14:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.871 02:14:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:20.871 02:14:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:20.871 02:14:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.871 02:14:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:20.871 02:14:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:20.871 02:14:20 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:20.871 02:14:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:20.872 02:14:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:20.872 02:14:20 -- common/autotest_common.sh@10 -- # set +x 00:12:20.872 02:14:20 -- nvmf/common.sh@469 -- # nvmfpid=77778 00:12:20.872 02:14:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.872 02:14:20 -- nvmf/common.sh@470 -- # waitforlisten 77778 00:12:20.872 02:14:20 -- common/autotest_common.sh@819 -- # '[' -z 77778 ']' 00:12:20.872 02:14:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.872 02:14:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:20.872 02:14:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.872 02:14:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:20.872 02:14:20 -- common/autotest_common.sh@10 -- # set +x 00:12:20.872 [2024-07-15 02:14:20.378052] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:20.872 [2024-07-15 02:14:20.378119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.130 [2024-07-15 02:14:20.515881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.130 [2024-07-15 02:14:20.604986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:21.130 [2024-07-15 02:14:20.605455] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.130 [2024-07-15 02:14:20.605638] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.130 [2024-07-15 02:14:20.605808] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
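Condensed from the nvmf_veth_init trace above, this is the topology every test in this run talks across: the initiator side stays in the root namespace at 10.0.0.1, the target gets 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, and a bridge joins the host-side peer interfaces. A sketch, with the individual "ip link set ... up" steps folded into one comment:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring up nvmf_init_if, nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2, both
  # in-namespace interfaces and the namespace loopback, then bridge the peers
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow the iptables rules check both directions (root namespace to 10.0.0.2 and 10.0.0.3, namespace back to 10.0.0.1) before nvmf_tgt is started, so any later connection failure points at the target rather than the plumbing.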
00:12:21.130 [2024-07-15 02:14:20.606032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.130 [2024-07-15 02:14:20.606453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.130 [2024-07-15 02:14:20.606647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.130 [2024-07-15 02:14:20.606651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.065 02:14:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:22.065 02:14:21 -- common/autotest_common.sh@852 -- # return 0 00:12:22.065 02:14:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:22.065 02:14:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:22.065 02:14:21 -- common/autotest_common.sh@10 -- # set +x 00:12:22.065 02:14:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.065 02:14:21 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:22.065 02:14:21 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21891 00:12:22.323 [2024-07-15 02:14:21.699009] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:22.323 02:14:21 -- target/invalid.sh@40 -- # out='2024/07/15 02:14:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21891 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:22.323 request: 00:12:22.323 { 00:12:22.323 "method": "nvmf_create_subsystem", 00:12:22.323 "params": { 00:12:22.323 "nqn": "nqn.2016-06.io.spdk:cnode21891", 00:12:22.323 "tgt_name": "foobar" 00:12:22.323 } 00:12:22.323 } 00:12:22.323 Got JSON-RPC error response 00:12:22.323 GoRPCClient: error on JSON-RPC call' 00:12:22.323 02:14:21 -- target/invalid.sh@41 -- # [[ 2024/07/15 02:14:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21891 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:22.323 request: 00:12:22.323 { 00:12:22.323 "method": "nvmf_create_subsystem", 00:12:22.323 "params": { 00:12:22.323 "nqn": "nqn.2016-06.io.spdk:cnode21891", 00:12:22.323 "tgt_name": "foobar" 00:12:22.323 } 00:12:22.323 } 00:12:22.323 Got JSON-RPC error response 00:12:22.323 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:22.323 02:14:21 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:22.323 02:14:21 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18635 00:12:22.582 [2024-07-15 02:14:21.987641] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18635: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:22.582 02:14:22 -- target/invalid.sh@45 -- # out='2024/07/15 02:14:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18635 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:22.582 request: 00:12:22.582 { 00:12:22.582 "method": "nvmf_create_subsystem", 00:12:22.582 "params": { 00:12:22.582 "nqn": "nqn.2016-06.io.spdk:cnode18635", 00:12:22.582 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:22.582 } 00:12:22.582 } 00:12:22.582 Got JSON-RPC error response 00:12:22.582 GoRPCClient: error on JSON-RPC call' 00:12:22.582 02:14:22 -- target/invalid.sh@46 -- # [[ 2024/07/15 02:14:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18635 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:22.582 request: 00:12:22.582 { 00:12:22.582 "method": "nvmf_create_subsystem", 00:12:22.582 "params": { 00:12:22.582 "nqn": "nqn.2016-06.io.spdk:cnode18635", 00:12:22.582 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:22.582 } 00:12:22.582 } 00:12:22.582 Got JSON-RPC error response 00:12:22.582 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:22.582 02:14:22 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:22.582 02:14:22 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18128 00:12:22.841 [2024-07-15 02:14:22.207934] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18128: invalid model number 'SPDK_Controller' 00:12:22.841 02:14:22 -- target/invalid.sh@50 -- # out='2024/07/15 02:14:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode18128], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:22.841 request: 00:12:22.841 { 00:12:22.841 "method": "nvmf_create_subsystem", 00:12:22.841 "params": { 00:12:22.841 "nqn": "nqn.2016-06.io.spdk:cnode18128", 00:12:22.841 "model_number": "SPDK_Controller\u001f" 00:12:22.841 } 00:12:22.841 } 00:12:22.841 Got JSON-RPC error response 00:12:22.841 GoRPCClient: error on JSON-RPC call' 00:12:22.841 02:14:22 -- target/invalid.sh@51 -- # [[ 2024/07/15 02:14:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode18128], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:22.841 request: 00:12:22.841 { 00:12:22.841 "method": "nvmf_create_subsystem", 00:12:22.841 "params": { 00:12:22.841 "nqn": "nqn.2016-06.io.spdk:cnode18128", 00:12:22.841 "model_number": "SPDK_Controller\u001f" 00:12:22.841 } 00:12:22.841 } 00:12:22.841 Got JSON-RPC error response 00:12:22.841 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:22.841 02:14:22 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:22.841 02:14:22 -- target/invalid.sh@19 -- # local length=21 ll 00:12:22.841 02:14:22 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:22.841 02:14:22 -- target/invalid.sh@21 -- # local chars 00:12:22.841 02:14:22 -- target/invalid.sh@22 -- # local string 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 100 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=d 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 91 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+='[' 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 89 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=Y 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 57 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=9 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 61 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+== 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 117 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=u 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 59 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=';' 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 103 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=g 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 107 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=k 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 41 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=')' 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 116 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=t 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 122 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=z 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 107 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=k 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 104 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=h 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 116 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=t 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 117 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=u 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 58 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=: 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 32 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=' ' 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 124 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+='|' 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 126 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+='~' 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # printf %x 122 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:22.841 02:14:22 -- target/invalid.sh@25 -- # string+=z 00:12:22.841 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:22.842 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:22.842 02:14:22 -- target/invalid.sh@28 -- # [[ d == \- ]] 00:12:22.842 02:14:22 -- target/invalid.sh@31 -- # echo 'd[Y9=u;gk)tzkhtu: |~z' 00:12:22.842 02:14:22 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'd[Y9=u;gk)tzkhtu: |~z' 
nqn.2016-06.io.spdk:cnode26288 00:12:23.101 [2024-07-15 02:14:22.512458] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26288: invalid serial number 'd[Y9=u;gk)tzkhtu: |~z' 00:12:23.101 02:14:22 -- target/invalid.sh@54 -- # out='2024/07/15 02:14:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26288 serial_number:d[Y9=u;gk)tzkhtu: |~z], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN d[Y9=u;gk)tzkhtu: |~z 00:12:23.101 request: 00:12:23.101 { 00:12:23.101 "method": "nvmf_create_subsystem", 00:12:23.101 "params": { 00:12:23.101 "nqn": "nqn.2016-06.io.spdk:cnode26288", 00:12:23.101 "serial_number": "d[Y9=u;gk)tzkhtu: |~z" 00:12:23.101 } 00:12:23.101 } 00:12:23.101 Got JSON-RPC error response 00:12:23.101 GoRPCClient: error on JSON-RPC call' 00:12:23.101 02:14:22 -- target/invalid.sh@55 -- # [[ 2024/07/15 02:14:22 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26288 serial_number:d[Y9=u;gk)tzkhtu: |~z], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN d[Y9=u;gk)tzkhtu: |~z 00:12:23.101 request: 00:12:23.101 { 00:12:23.101 "method": "nvmf_create_subsystem", 00:12:23.101 "params": { 00:12:23.101 "nqn": "nqn.2016-06.io.spdk:cnode26288", 00:12:23.101 "serial_number": "d[Y9=u;gk)tzkhtu: |~z" 00:12:23.101 } 00:12:23.101 } 00:12:23.101 Got JSON-RPC error response 00:12:23.101 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:23.101 02:14:22 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:23.101 02:14:22 -- target/invalid.sh@19 -- # local length=41 ll 00:12:23.101 02:14:22 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:23.101 02:14:22 -- target/invalid.sh@21 -- # local chars 00:12:23.101 02:14:22 -- target/invalid.sh@22 -- # local string 00:12:23.101 02:14:22 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:23.101 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # printf %x 122 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # string+=z 00:12:23.101 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.101 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # printf %x 107 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # string+=k 00:12:23.101 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.101 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # printf %x 88 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # string+=X 00:12:23.101 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.101 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.101 02:14:22 -- target/invalid.sh@25 -- # printf 
%x 46 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=. 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 88 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=X 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 37 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=% 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 69 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=E 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 122 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=z 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 91 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+='[' 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 87 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=W 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 127 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 59 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=';' 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 84 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=T 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 58 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=: 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf 
%x 91 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+='[' 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 97 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=a 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 112 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=p 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 72 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=H 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 66 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=B 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 75 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=K 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 83 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=S 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 104 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # string+=h 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.102 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # printf %x 105 00:12:23.102 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=i 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 43 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=+ 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 52 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=4 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 90 
00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=Z 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 85 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=U 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 78 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=N 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 88 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=X 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 49 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=1 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 116 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=t 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 126 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+='~' 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 106 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=j 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 66 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+=B 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 63 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # string+='?' 
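The long printf / echo -e / string+= run above (and the 21-character run before it) is gen_random_s building a random string one character at a time. A sketch of the helper as the trace implies it, assuming a uniform RANDOM pick, which the log itself does not show:

  gen_random_s() {
      # chars spans code points 32..127, matching the chars=('32' ... '127')
      # array in the trace
      local length=$1 ll string=
      local chars=($(seq 32 127))
      for ((ll = 0; ll < length; ll++)); do
          # printf gives the hex code, echo -e turns it into the character
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      # the [[ d == \- ]] style checks in the trace guard against a leading
      # '-'; what the guard does on a match is not visible in this log
      echo "$string"
  }

Here gen_random_s 41 is assembling the 41-character model number that gets handed to nvmf_create_subsystem -d just below, where it is rejected as an invalid MN.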
00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.361 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.361 02:14:22 -- target/invalid.sh@25 -- # printf %x 97 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # string+=a 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # printf %x 68 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # string+=D 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # printf %x 123 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # string+='{' 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # printf %x 114 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # string+=r 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # printf %x 35 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # string+='#' 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # printf %x 78 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:23.362 02:14:22 -- target/invalid.sh@25 -- # string+=N 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:23.362 02:14:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:23.362 02:14:22 -- target/invalid.sh@28 -- # [[ z == \- ]] 00:12:23.362 02:14:22 -- target/invalid.sh@31 -- # echo 'zkX.X%Ez[W;T:[apHBKShi+4ZUNX1t~jB?aD{r#N' 00:12:23.362 02:14:22 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'zkX.X%Ez[W;T:[apHBKShi+4ZUNX1t~jB?aD{r#N' nqn.2016-06.io.spdk:cnode28344 00:12:23.620 [2024-07-15 02:14:23.009310] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28344: invalid model number 'zkX.X%Ez[W;T:[apHBKShi+4ZUNX1t~jB?aD{r#N' 00:12:23.620 02:14:23 -- target/invalid.sh@58 -- # out='2024/07/15 02:14:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:zkX.X%Ez[W;T:[apHBKShi+4ZUNX1t~jB?aD{r#N nqn:nqn.2016-06.io.spdk:cnode28344], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN zkX.X%Ez[W;T:[apHBKShi+4ZUNX1t~jB?aD{r#N 00:12:23.620 request: 00:12:23.620 { 00:12:23.620 "method": "nvmf_create_subsystem", 00:12:23.620 "params": { 00:12:23.620 "nqn": "nqn.2016-06.io.spdk:cnode28344", 00:12:23.620 "model_number": "zkX.X%Ez[W\u007f;T:[apHBKShi+4ZUNX1t~jB?aD{r#N" 00:12:23.620 } 00:12:23.620 } 00:12:23.620 Got JSON-RPC error response 00:12:23.620 GoRPCClient: error on JSON-RPC call' 00:12:23.620 02:14:23 -- target/invalid.sh@59 -- # [[ 2024/07/15 02:14:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:zkX.X%Ez[W;T:[apHBKShi+4ZUNX1t~jB?aD{r#N 
nqn:nqn.2016-06.io.spdk:cnode28344], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN zkX.X%Ez[W;T:[apHBKShi+4ZUNX1t~jB?aD{r#N 00:12:23.620 request: 00:12:23.620 { 00:12:23.620 "method": "nvmf_create_subsystem", 00:12:23.620 "params": { 00:12:23.620 "nqn": "nqn.2016-06.io.spdk:cnode28344", 00:12:23.620 "model_number": "zkX.X%Ez[W\u007f;T:[apHBKShi+4ZUNX1t~jB?aD{r#N" 00:12:23.620 } 00:12:23.620 } 00:12:23.620 Got JSON-RPC error response 00:12:23.620 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:23.621 02:14:23 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:23.878 [2024-07-15 02:14:23.229706] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.878 02:14:23 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:24.136 02:14:23 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:24.136 02:14:23 -- target/invalid.sh@67 -- # echo '' 00:12:24.136 02:14:23 -- target/invalid.sh@67 -- # head -n 1 00:12:24.136 02:14:23 -- target/invalid.sh@67 -- # IP= 00:12:24.136 02:14:23 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:24.394 [2024-07-15 02:14:23.755347] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:24.394 02:14:23 -- target/invalid.sh@69 -- # out='2024/07/15 02:14:23 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:24.394 request: 00:12:24.394 { 00:12:24.394 "method": "nvmf_subsystem_remove_listener", 00:12:24.394 "params": { 00:12:24.394 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:24.394 "listen_address": { 00:12:24.394 "trtype": "tcp", 00:12:24.394 "traddr": "", 00:12:24.394 "trsvcid": "4421" 00:12:24.394 } 00:12:24.394 } 00:12:24.394 } 00:12:24.394 Got JSON-RPC error response 00:12:24.394 GoRPCClient: error on JSON-RPC call' 00:12:24.394 02:14:23 -- target/invalid.sh@70 -- # [[ 2024/07/15 02:14:23 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:24.394 request: 00:12:24.394 { 00:12:24.394 "method": "nvmf_subsystem_remove_listener", 00:12:24.394 "params": { 00:12:24.394 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:24.394 "listen_address": { 00:12:24.394 "trtype": "tcp", 00:12:24.394 "traddr": "", 00:12:24.394 "trsvcid": "4421" 00:12:24.394 } 00:12:24.394 } 00:12:24.394 } 00:12:24.394 Got JSON-RPC error response 00:12:24.394 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:24.394 02:14:23 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28539 -i 0 00:12:24.653 [2024-07-15 02:14:23.984834] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28539: invalid cntlid range [0-65519] 00:12:24.653 02:14:24 -- target/invalid.sh@73 -- # out='2024/07/15 02:14:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 
nqn:nqn.2016-06.io.spdk:cnode28539], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:24.653 request: 00:12:24.653 { 00:12:24.653 "method": "nvmf_create_subsystem", 00:12:24.653 "params": { 00:12:24.653 "nqn": "nqn.2016-06.io.spdk:cnode28539", 00:12:24.653 "min_cntlid": 0 00:12:24.653 } 00:12:24.653 } 00:12:24.653 Got JSON-RPC error response 00:12:24.653 GoRPCClient: error on JSON-RPC call' 00:12:24.653 02:14:24 -- target/invalid.sh@74 -- # [[ 2024/07/15 02:14:23 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode28539], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:24.653 request: 00:12:24.653 { 00:12:24.653 "method": "nvmf_create_subsystem", 00:12:24.653 "params": { 00:12:24.653 "nqn": "nqn.2016-06.io.spdk:cnode28539", 00:12:24.653 "min_cntlid": 0 00:12:24.653 } 00:12:24.653 } 00:12:24.653 Got JSON-RPC error response 00:12:24.653 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:24.653 02:14:24 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24220 -i 65520 00:12:24.912 [2024-07-15 02:14:24.269282] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24220: invalid cntlid range [65520-65519] 00:12:24.912 02:14:24 -- target/invalid.sh@75 -- # out='2024/07/15 02:14:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24220], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:24.912 request: 00:12:24.912 { 00:12:24.912 "method": "nvmf_create_subsystem", 00:12:24.912 "params": { 00:12:24.912 "nqn": "nqn.2016-06.io.spdk:cnode24220", 00:12:24.912 "min_cntlid": 65520 00:12:24.912 } 00:12:24.912 } 00:12:24.912 Got JSON-RPC error response 00:12:24.912 GoRPCClient: error on JSON-RPC call' 00:12:24.912 02:14:24 -- target/invalid.sh@76 -- # [[ 2024/07/15 02:14:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24220], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:24.912 request: 00:12:24.912 { 00:12:24.912 "method": "nvmf_create_subsystem", 00:12:24.912 "params": { 00:12:24.912 "nqn": "nqn.2016-06.io.spdk:cnode24220", 00:12:24.912 "min_cntlid": 65520 00:12:24.912 } 00:12:24.912 } 00:12:24.912 Got JSON-RPC error response 00:12:24.912 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:24.912 02:14:24 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11798 -I 0 00:12:25.170 [2024-07-15 02:14:24.489588] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11798: invalid cntlid range [1-0] 00:12:25.170 02:14:24 -- target/invalid.sh@77 -- # out='2024/07/15 02:14:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11798], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:25.170 request: 00:12:25.170 { 00:12:25.170 "method": "nvmf_create_subsystem", 00:12:25.170 "params": { 00:12:25.170 "nqn": "nqn.2016-06.io.spdk:cnode11798", 00:12:25.170 "max_cntlid": 0 
00:12:25.170 } 00:12:25.170 } 00:12:25.170 Got JSON-RPC error response 00:12:25.170 GoRPCClient: error on JSON-RPC call' 00:12:25.170 02:14:24 -- target/invalid.sh@78 -- # [[ 2024/07/15 02:14:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode11798], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:25.170 request: 00:12:25.170 { 00:12:25.170 "method": "nvmf_create_subsystem", 00:12:25.170 "params": { 00:12:25.170 "nqn": "nqn.2016-06.io.spdk:cnode11798", 00:12:25.170 "max_cntlid": 0 00:12:25.170 } 00:12:25.170 } 00:12:25.170 Got JSON-RPC error response 00:12:25.170 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:25.170 02:14:24 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6764 -I 65520 00:12:25.429 [2024-07-15 02:14:24.770079] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6764: invalid cntlid range [1-65520] 00:12:25.429 02:14:24 -- target/invalid.sh@79 -- # out='2024/07/15 02:14:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode6764], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:25.429 request: 00:12:25.429 { 00:12:25.429 "method": "nvmf_create_subsystem", 00:12:25.429 "params": { 00:12:25.429 "nqn": "nqn.2016-06.io.spdk:cnode6764", 00:12:25.429 "max_cntlid": 65520 00:12:25.429 } 00:12:25.429 } 00:12:25.429 Got JSON-RPC error response 00:12:25.429 GoRPCClient: error on JSON-RPC call' 00:12:25.429 02:14:24 -- target/invalid.sh@80 -- # [[ 2024/07/15 02:14:24 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode6764], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:25.429 request: 00:12:25.429 { 00:12:25.429 "method": "nvmf_create_subsystem", 00:12:25.429 "params": { 00:12:25.429 "nqn": "nqn.2016-06.io.spdk:cnode6764", 00:12:25.429 "max_cntlid": 65520 00:12:25.429 } 00:12:25.429 } 00:12:25.429 Got JSON-RPC error response 00:12:25.429 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:25.429 02:14:24 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28238 -i 6 -I 5 00:12:25.688 [2024-07-15 02:14:25.070498] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28238: invalid cntlid range [6-5] 00:12:25.688 02:14:25 -- target/invalid.sh@83 -- # out='2024/07/15 02:14:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode28238], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:25.688 request: 00:12:25.688 { 00:12:25.688 "method": "nvmf_create_subsystem", 00:12:25.688 "params": { 00:12:25.688 "nqn": "nqn.2016-06.io.spdk:cnode28238", 00:12:25.688 "min_cntlid": 6, 00:12:25.688 "max_cntlid": 5 00:12:25.688 } 00:12:25.688 } 00:12:25.688 Got JSON-RPC error response 00:12:25.688 GoRPCClient: error on JSON-RPC call' 00:12:25.688 02:14:25 -- target/invalid.sh@84 -- # [[ 2024/07/15 02:14:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 
nqn:nqn.2016-06.io.spdk:cnode28238], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:25.688 request: 00:12:25.688 { 00:12:25.688 "method": "nvmf_create_subsystem", 00:12:25.688 "params": { 00:12:25.688 "nqn": "nqn.2016-06.io.spdk:cnode28238", 00:12:25.688 "min_cntlid": 6, 00:12:25.688 "max_cntlid": 5 00:12:25.688 } 00:12:25.688 } 00:12:25.688 Got JSON-RPC error response 00:12:25.688 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:25.688 02:14:25 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:25.688 02:14:25 -- target/invalid.sh@87 -- # out='request: 00:12:25.688 { 00:12:25.688 "name": "foobar", 00:12:25.688 "method": "nvmf_delete_target", 00:12:25.688 "req_id": 1 00:12:25.688 } 00:12:25.688 Got JSON-RPC error response 00:12:25.688 response: 00:12:25.688 { 00:12:25.688 "code": -32602, 00:12:25.688 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:25.688 }' 00:12:25.688 02:14:25 -- target/invalid.sh@88 -- # [[ request: 00:12:25.688 { 00:12:25.688 "name": "foobar", 00:12:25.688 "method": "nvmf_delete_target", 00:12:25.688 "req_id": 1 00:12:25.688 } 00:12:25.688 Got JSON-RPC error response 00:12:25.688 response: 00:12:25.688 { 00:12:25.688 "code": -32602, 00:12:25.688 "message": "The specified target doesn't exist, cannot delete it." 00:12:25.688 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:25.688 02:14:25 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:25.688 02:14:25 -- target/invalid.sh@91 -- # nvmftestfini 00:12:25.688 02:14:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:25.688 02:14:25 -- nvmf/common.sh@116 -- # sync 00:12:25.688 02:14:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:25.688 02:14:25 -- nvmf/common.sh@119 -- # set +e 00:12:25.688 02:14:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:25.688 02:14:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:25.947 rmmod nvme_tcp 00:12:25.947 rmmod nvme_fabrics 00:12:25.947 rmmod nvme_keyring 00:12:25.947 02:14:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:25.947 02:14:25 -- nvmf/common.sh@123 -- # set -e 00:12:25.947 02:14:25 -- nvmf/common.sh@124 -- # return 0 00:12:25.947 02:14:25 -- nvmf/common.sh@477 -- # '[' -n 77778 ']' 00:12:25.947 02:14:25 -- nvmf/common.sh@478 -- # killprocess 77778 00:12:25.947 02:14:25 -- common/autotest_common.sh@926 -- # '[' -z 77778 ']' 00:12:25.947 02:14:25 -- common/autotest_common.sh@930 -- # kill -0 77778 00:12:25.947 02:14:25 -- common/autotest_common.sh@931 -- # uname 00:12:25.947 02:14:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:25.947 02:14:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77778 00:12:25.947 killing process with pid 77778 00:12:25.947 02:14:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:25.947 02:14:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:25.947 02:14:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77778' 00:12:25.947 02:14:25 -- common/autotest_common.sh@945 -- # kill 77778 00:12:25.947 02:14:25 -- common/autotest_common.sh@950 -- # wait 77778 00:12:26.206 02:14:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:26.206 02:14:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:26.206 02:14:25 -- nvmf/common.sh@484 
-- # nvmf_tcp_fini 00:12:26.206 02:14:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.206 02:14:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:26.206 02:14:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.206 02:14:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.206 02:14:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.206 02:14:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:26.206 ************************************ 00:12:26.206 END TEST nvmf_invalid 00:12:26.206 ************************************ 00:12:26.206 00:12:26.206 real 0m5.671s 00:12:26.206 user 0m22.709s 00:12:26.206 sys 0m1.280s 00:12:26.206 02:14:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.206 02:14:25 -- common/autotest_common.sh@10 -- # set +x 00:12:26.206 02:14:25 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:26.206 02:14:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:26.206 02:14:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:26.206 02:14:25 -- common/autotest_common.sh@10 -- # set +x 00:12:26.206 ************************************ 00:12:26.206 START TEST nvmf_abort 00:12:26.206 ************************************ 00:12:26.206 02:14:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:26.206 * Looking for test storage... 00:12:26.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:26.206 02:14:25 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:26.206 02:14:25 -- nvmf/common.sh@7 -- # uname -s 00:12:26.206 02:14:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.206 02:14:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.206 02:14:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.206 02:14:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.206 02:14:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.206 02:14:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.206 02:14:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.206 02:14:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.206 02:14:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.206 02:14:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.206 02:14:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:26.206 02:14:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:26.206 02:14:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.206 02:14:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.206 02:14:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:26.206 02:14:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:26.206 02:14:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.206 02:14:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.206 02:14:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.206 02:14:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.206 02:14:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.206 02:14:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.206 02:14:25 -- paths/export.sh@5 -- # export PATH 00:12:26.206 02:14:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.206 02:14:25 -- nvmf/common.sh@46 -- # : 0 00:12:26.206 02:14:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:26.206 02:14:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:26.206 02:14:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:26.206 02:14:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.206 02:14:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.206 02:14:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:26.206 02:14:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:26.206 02:14:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:26.206 02:14:25 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:26.206 02:14:25 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:26.206 02:14:25 -- target/abort.sh@14 -- # nvmftestinit 00:12:26.206 02:14:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:26.206 02:14:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.206 02:14:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:26.206 02:14:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:26.206 02:14:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:26.206 02:14:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:26.206 02:14:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.206 02:14:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.206 02:14:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:26.206 02:14:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:26.206 02:14:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:26.206 02:14:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:26.206 02:14:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:26.206 02:14:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:26.206 02:14:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.206 02:14:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.206 02:14:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:26.206 02:14:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:26.206 02:14:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:26.206 02:14:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:26.206 02:14:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:26.206 02:14:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.206 02:14:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:26.206 02:14:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:26.206 02:14:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:26.206 02:14:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:26.206 02:14:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:26.206 02:14:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:26.206 Cannot find device "nvmf_tgt_br" 00:12:26.206 02:14:25 -- nvmf/common.sh@154 -- # true 00:12:26.206 02:14:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:26.206 Cannot find device "nvmf_tgt_br2" 00:12:26.206 02:14:25 -- nvmf/common.sh@155 -- # true 00:12:26.206 02:14:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:26.206 02:14:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:26.206 Cannot find device "nvmf_tgt_br" 00:12:26.206 02:14:25 -- nvmf/common.sh@157 -- # true 00:12:26.206 02:14:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:26.206 Cannot find device "nvmf_tgt_br2" 00:12:26.206 02:14:25 -- nvmf/common.sh@158 -- # true 00:12:26.206 02:14:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:26.464 02:14:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:26.464 02:14:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:26.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.464 02:14:25 -- nvmf/common.sh@161 -- # true 00:12:26.464 02:14:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:26.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:26.464 02:14:25 -- nvmf/common.sh@162 -- # true 00:12:26.464 02:14:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:26.464 02:14:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:26.464 02:14:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:26.464 02:14:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:26.464 
02:14:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:26.464 02:14:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:26.464 02:14:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:26.464 02:14:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:26.464 02:14:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:26.464 02:14:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:26.464 02:14:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:26.464 02:14:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:26.464 02:14:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:26.464 02:14:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:26.464 02:14:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:26.464 02:14:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:26.464 02:14:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:26.464 02:14:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:26.464 02:14:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:26.464 02:14:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:26.464 02:14:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:26.464 02:14:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:26.465 02:14:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:26.465 02:14:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:26.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:26.465 00:12:26.465 --- 10.0.0.2 ping statistics --- 00:12:26.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.465 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:26.465 02:14:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:26.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:26.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:12:26.465 00:12:26.465 --- 10.0.0.3 ping statistics --- 00:12:26.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.465 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:26.465 02:14:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:26.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:12:26.465 00:12:26.465 --- 10.0.0.1 ping statistics --- 00:12:26.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.465 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:26.465 02:14:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.465 02:14:25 -- nvmf/common.sh@421 -- # return 0 00:12:26.465 02:14:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:26.465 02:14:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.465 02:14:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:26.465 02:14:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:26.465 02:14:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.465 02:14:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:26.465 02:14:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:26.465 02:14:26 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:26.465 02:14:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:26.465 02:14:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:26.465 02:14:26 -- common/autotest_common.sh@10 -- # set +x 00:12:26.465 02:14:26 -- nvmf/common.sh@469 -- # nvmfpid=78275 00:12:26.465 02:14:26 -- nvmf/common.sh@470 -- # waitforlisten 78275 00:12:26.465 02:14:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:26.465 02:14:26 -- common/autotest_common.sh@819 -- # '[' -z 78275 ']' 00:12:26.465 02:14:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.465 02:14:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:26.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.465 02:14:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.465 02:14:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:26.465 02:14:26 -- common/autotest_common.sh@10 -- # set +x 00:12:26.723 [2024-07-15 02:14:26.059391] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:26.723 [2024-07-15 02:14:26.059498] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.723 [2024-07-15 02:14:26.190798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.723 [2024-07-15 02:14:26.275676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:26.723 [2024-07-15 02:14:26.276116] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.723 [2024-07-15 02:14:26.276240] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.723 [2024-07-15 02:14:26.276446] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
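Stripped of trace noise, the nvmf_veth_init sequence above builds one small topology: an initiator veth pair left in the root namespace, two target pairs whose device ends move into nvmf_tgt_ns_spdk, and a bridge joining the three peer ends, with iptables rules admitting NVMe/TCP traffic on port 4420. The three pings then confirm reachability in both directions before the target starts. Condensed from the commands in the trace (a sketch, not a verbatim excerpt of nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    # bring every interface up (omitted here), then join the peer ends to the bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, across the bridge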
00:12:26.723 [2024-07-15 02:14:26.276865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.723 [2024-07-15 02:14:26.276766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.723 [2024-07-15 02:14:26.276857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.658 02:14:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:27.658 02:14:27 -- common/autotest_common.sh@852 -- # return 0 00:12:27.658 02:14:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:27.658 02:14:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:27.658 02:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:27.658 02:14:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.658 02:14:27 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:27.658 02:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.658 02:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:27.658 [2024-07-15 02:14:27.058805] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.658 02:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.658 02:14:27 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:27.658 02:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.658 02:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:27.658 Malloc0 00:12:27.658 02:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.658 02:14:27 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:27.658 02:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.658 02:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:27.658 Delay0 00:12:27.658 02:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.658 02:14:27 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:27.658 02:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.658 02:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:27.658 02:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.658 02:14:27 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:27.658 02:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.658 02:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:27.658 02:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.658 02:14:27 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:27.658 02:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.658 02:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:27.658 [2024-07-15 02:14:27.136723] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.658 02:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.658 02:14:27 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:27.658 02:14:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.658 02:14:27 -- common/autotest_common.sh@10 -- # set +x 00:12:27.658 02:14:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.658 02:14:27 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:27.916 [2024-07-15 02:14:27.316750] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:12:29.818 Initializing NVMe Controllers
00:12:29.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:12:29.818 Controller IO queue size 128, less than required.
00:12:29.818 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:29.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:12:29.818 Initialization complete. Launching workers.
00:12:29.818 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33408
00:12:29.818 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33473, failed to submit 62
00:12:29.818 success 33408, unsuccessful 65, failed 0
00:12:29.818 02:14:29 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:29.818 02:14:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.818 02:14:29 -- common/autotest_common.sh@10 -- # set +x 00:12:29.818 02:14:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.818 02:14:29 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:29.818 02:14:29 -- target/abort.sh@38 -- # nvmftestfini 00:12:29.818 02:14:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:29.818 02:14:29 -- nvmf/common.sh@116 -- # sync 00:12:30.077 02:14:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:30.077 02:14:29 -- nvmf/common.sh@119 -- # set +e 00:12:30.077 02:14:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:30.077 02:14:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:30.077 rmmod nvme_tcp 00:12:30.077 rmmod nvme_fabrics 00:12:30.077 rmmod nvme_keyring 00:12:30.077 02:14:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.077 02:14:29 -- nvmf/common.sh@123 -- # set -e 00:12:30.077 02:14:29 -- nvmf/common.sh@124 -- # return 0 00:12:30.077 02:14:29 -- nvmf/common.sh@477 -- # '[' -n 78275 ']' 00:12:30.077 02:14:29 -- nvmf/common.sh@478 -- # killprocess 78275 00:12:30.077 02:14:29 -- common/autotest_common.sh@926 -- # '[' -z 78275 ']' 00:12:30.077 02:14:29 -- common/autotest_common.sh@930 -- # kill -0 78275 00:12:30.077 02:14:29 -- common/autotest_common.sh@931 -- # uname 00:12:30.077 02:14:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:30.077 02:14:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78275 00:12:30.077 02:14:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:30.077 02:14:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:30.077 killing process with pid 78275 00:12:30.077 02:14:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78275' 00:12:30.077 02:14:29 -- common/autotest_common.sh@945 -- # kill 78275 00:12:30.077 02:14:29 -- common/autotest_common.sh@950 -- # wait 78275 00:12:30.335 02:14:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:30.335 02:14:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:30.335 02:14:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:30.335 02:14:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.335 02:14:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:30.335 02:14:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.335
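Stripped of the rpc_cmd/xtrace wrappers, the abort test body above comes down to six RPC calls and one workload binary. The condensed sequence below is assembled from the traced commands, with rpc.py standing in for the rpc_cmd wrapper that talks to /var/tmp/spdk.sock; the delay bdev exists so that I/O stays queued long enough for the aborts to land:

    cd /home/vagrant/spdk_repo/spdk
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # drive queued I/O through the delay bdev, then abort it from the initiator side
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128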
02:14:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.335 02:14:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.335 02:14:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:30.335 00:12:30.335 real 0m4.215s 00:12:30.335 user 0m12.363s 00:12:30.335 sys 0m0.963s 00:12:30.335 02:14:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.335 02:14:29 -- common/autotest_common.sh@10 -- # set +x 00:12:30.335 ************************************ 00:12:30.335 END TEST nvmf_abort 00:12:30.335 ************************************ 00:12:30.335 02:14:29 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:30.335 02:14:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:30.335 02:14:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:30.335 02:14:29 -- common/autotest_common.sh@10 -- # set +x 00:12:30.335 ************************************ 00:12:30.335 START TEST nvmf_ns_hotplug_stress 00:12:30.335 ************************************ 00:12:30.335 02:14:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:30.594 * Looking for test storage... 00:12:30.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:30.594 02:14:29 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:30.594 02:14:29 -- nvmf/common.sh@7 -- # uname -s 00:12:30.594 02:14:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.594 02:14:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.594 02:14:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.594 02:14:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.594 02:14:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.594 02:14:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.594 02:14:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.594 02:14:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.594 02:14:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.594 02:14:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.594 02:14:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:30.594 02:14:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:12:30.594 02:14:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.594 02:14:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.594 02:14:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:30.594 02:14:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:30.594 02:14:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.594 02:14:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.594 02:14:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.594 02:14:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.594 02:14:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.594 02:14:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.594 02:14:29 -- paths/export.sh@5 -- # export PATH 00:12:30.594 02:14:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.594 02:14:29 -- nvmf/common.sh@46 -- # : 0 00:12:30.594 02:14:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:30.594 02:14:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:30.594 02:14:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:30.594 02:14:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.594 02:14:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.594 02:14:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:30.594 02:14:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:30.594 02:14:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:30.594 02:14:29 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:30.594 02:14:29 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:30.594 02:14:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:30.594 02:14:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.594 02:14:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:30.595 02:14:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:30.595 02:14:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:30.595 02:14:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
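build_nvmf_app_args above also shows why the harness keeps the target's command line in a bash array rather than a flat string: options are appended element-wise with quoting intact, and the `ip netns exec` wrapper can later be spliced in front without re-quoting anything. A sketch of the pattern as this log uses it (paths abbreviated; the last two assignments mirror nvmf/common.sh@147 and @208 elsewhere in the trace):

    NVMF_APP_SHM_ID=0
    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=(./build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # build_nvmf_app_args
    # splice the namespace wrapper in front of the target command
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0xE &                                # expands word-for-word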
00:12:30.595 02:14:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.595 02:14:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.595 02:14:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:30.595 02:14:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:30.595 02:14:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:30.595 02:14:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:30.595 02:14:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:30.595 02:14:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:30.595 02:14:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.595 02:14:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.595 02:14:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:30.595 02:14:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:30.595 02:14:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:30.595 02:14:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:30.595 02:14:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:30.595 02:14:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.595 02:14:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:30.595 02:14:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:30.595 02:14:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:30.595 02:14:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:30.595 02:14:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:30.595 02:14:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:30.595 Cannot find device "nvmf_tgt_br" 00:12:30.595 02:14:29 -- nvmf/common.sh@154 -- # true 00:12:30.595 02:14:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:30.595 Cannot find device "nvmf_tgt_br2" 00:12:30.595 02:14:29 -- nvmf/common.sh@155 -- # true 00:12:30.595 02:14:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:30.595 02:14:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:30.595 Cannot find device "nvmf_tgt_br" 00:12:30.595 02:14:29 -- nvmf/common.sh@157 -- # true 00:12:30.595 02:14:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:30.595 Cannot find device "nvmf_tgt_br2" 00:12:30.595 02:14:30 -- nvmf/common.sh@158 -- # true 00:12:30.595 02:14:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:30.595 02:14:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:30.595 02:14:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:30.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.595 02:14:30 -- nvmf/common.sh@161 -- # true 00:12:30.595 02:14:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:30.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.595 02:14:30 -- nvmf/common.sh@162 -- # true 00:12:30.595 02:14:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:30.595 02:14:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:30.595 02:14:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:30.595 02:14:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:30.595 02:14:30 -- 
nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:30.595 02:14:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:30.595 02:14:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:30.595 02:14:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:30.853 02:14:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:30.853 02:14:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:30.853 02:14:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:30.853 02:14:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:30.853 02:14:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:30.853 02:14:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:30.853 02:14:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:30.853 02:14:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:30.853 02:14:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:30.853 02:14:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:30.853 02:14:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:30.853 02:14:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:30.853 02:14:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:30.854 02:14:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:30.854 02:14:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:30.854 02:14:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:30.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:12:30.854 00:12:30.854 --- 10.0.0.2 ping statistics --- 00:12:30.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.854 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:30.854 02:14:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:30.854 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:30.854 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:12:30.854 00:12:30.854 --- 10.0.0.3 ping statistics --- 00:12:30.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.854 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:30.854 02:14:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:30.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:12:30.854 00:12:30.854 --- 10.0.0.1 ping statistics --- 00:12:30.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.854 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:30.854 02:14:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.854 02:14:30 -- nvmf/common.sh@421 -- # return 0 00:12:30.854 02:14:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:30.854 02:14:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.854 02:14:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:30.854 02:14:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:30.854 02:14:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.854 02:14:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:30.854 02:14:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:30.854 02:14:30 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:30.854 02:14:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:30.854 02:14:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:30.854 02:14:30 -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 02:14:30 -- nvmf/common.sh@469 -- # nvmfpid=78540 00:12:30.854 02:14:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:30.854 02:14:30 -- nvmf/common.sh@470 -- # waitforlisten 78540 00:12:30.854 02:14:30 -- common/autotest_common.sh@819 -- # '[' -z 78540 ']' 00:12:30.854 02:14:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.854 02:14:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:30.854 02:14:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.854 02:14:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:30.854 02:14:30 -- common/autotest_common.sh@10 -- # set +x 00:12:30.854 [2024-07-15 02:14:30.360982] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:30.854 [2024-07-15 02:14:30.361662] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.113 [2024-07-15 02:14:30.503058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.113 [2024-07-15 02:14:30.587493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:31.113 [2024-07-15 02:14:30.587731] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.113 [2024-07-15 02:14:30.587750] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.113 [2024-07-15 02:14:30.587761] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
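nvmfappstart backgrounds the target and then blocks in waitforlisten until the RPC socket accepts requests, which is why "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." is printed before the EAL output above. A hedged sketch of such a readiness loop; the real waitforlisten in autotest_common.sh is more elaborate (retry cap, configurable socket path), but the shape is the same:

    wait_for_rpc() {
        local pid=$1
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        # poll the RPC socket; rpc_get_methods succeeds once the app is listening
        until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
            kill -0 "$pid" || return 1   # bail out if the target died during startup
            sleep 0.1
        done
    }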
00:12:31.113 [2024-07-15 02:14:30.587937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.113 [2024-07-15 02:14:30.588587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.113 [2024-07-15 02:14:30.588652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.048 02:14:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:32.048 02:14:31 -- common/autotest_common.sh@852 -- # return 0 00:12:32.048 02:14:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:32.048 02:14:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:32.048 02:14:31 -- common/autotest_common.sh@10 -- # set +x 00:12:32.048 02:14:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.048 02:14:31 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:32.048 02:14:31 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:32.307 [2024-07-15 02:14:31.664689] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.307 02:14:31 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:32.565 02:14:31 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.823 [2024-07-15 02:14:32.161737] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.823 02:14:32 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.093 02:14:32 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:33.376 Malloc0 00:12:33.376 02:14:32 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:33.376 Delay0 00:12:33.376 02:14:32 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.635 02:14:33 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:33.892 NULL1 00:12:33.892 02:14:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:34.150 02:14:33 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:34.150 02:14:33 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=78671 00:12:34.150 02:14:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:34.150 02:14:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.524 Read completed with error (sct=0, sc=11) 00:12:35.524 02:14:34 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.524 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:35.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:35.524 02:14:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:35.524 02:14:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:35.782 true 00:12:35.782 02:14:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:35.782 02:14:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.718 02:14:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.718 02:14:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:36.718 02:14:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:36.977 true 00:12:36.977 02:14:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:36.977 02:14:36 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.236 02:14:36 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.495 02:14:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:37.495 02:14:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:37.754 true 00:12:37.754 02:14:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:37.754 02:14:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.691 02:14:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.950 02:14:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:38.950 02:14:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:38.950 true 00:12:39.208 02:14:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:39.208 02:14:38 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.208 02:14:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.467 02:14:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:39.467 02:14:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:39.726 true 00:12:39.726 02:14:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:39.726 02:14:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.662 02:14:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
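Every numbered round that follows (null_size 1002, 1003, ...) repeats the same three RPCs for as long as the spdk_nvme_perf workload started earlier stays alive; the suppressed "Read completed with error" messages are the expected failures while namespace 1 is momentarily detached. Condensed from the traced script lines 44-50, this is the shape of the loop (a sketch, not a verbatim excerpt of ns_hotplug_stress.sh):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do   # run until perf (-t 30) exits
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        "$rpc_py" bdev_null_resize NULL1 "$null_size"   # grow NULL1 by one block
    done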
00:12:40.921 02:14:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:40.921 02:14:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:40.921 true 00:12:40.921 02:14:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:40.921 02:14:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.180 02:14:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.439 02:14:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:41.439 02:14:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:41.697 true 00:12:41.697 02:14:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:41.697 02:14:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.633 02:14:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.892 02:14:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:42.892 02:14:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:43.151 true 00:12:43.151 02:14:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:43.151 02:14:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.409 02:14:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.667 02:14:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:43.667 02:14:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:43.667 true 00:12:43.667 02:14:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:43.667 02:14:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.646 02:14:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.904 02:14:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:44.904 02:14:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:45.162 true 00:12:45.162 02:14:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:45.162 02:14:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.419 02:14:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.676 02:14:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:45.676 02:14:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:45.934 true 00:12:45.934 02:14:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:45.934 02:14:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:12:46.867 02:14:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.867 02:14:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:46.867 02:14:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:47.125 true 00:12:47.125 02:14:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:47.125 02:14:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.383 02:14:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.641 02:14:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:47.641 02:14:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:47.641 true 00:12:47.641 02:14:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:47.641 02:14:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.574 02:14:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.833 02:14:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:48.833 02:14:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:49.091 true 00:12:49.091 02:14:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:49.091 02:14:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.349 02:14:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.608 02:14:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:49.608 02:14:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:49.866 true 00:12:49.866 02:14:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:49.866 02:14:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.800 02:14:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.800 02:14:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:50.800 02:14:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:51.058 true 00:12:51.058 02:14:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:51.058 02:14:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.317 02:14:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.584 02:14:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:51.584 02:14:51 -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:51.857 true 00:12:51.857 02:14:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:51.857 02:14:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.791 02:14:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.049 02:14:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:53.049 02:14:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:53.049 true 00:12:53.308 02:14:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:53.308 02:14:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.308 02:14:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.566 02:14:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:53.566 02:14:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:53.862 true 00:12:53.862 02:14:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:53.862 02:14:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.797 02:14:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:54.797 02:14:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:54.797 02:14:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:55.056 true 00:12:55.056 02:14:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:55.056 02:14:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.315 02:14:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.573 02:14:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:55.573 02:14:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:55.832 true 00:12:55.832 02:14:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:55.832 02:14:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.768 02:14:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.054 02:14:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:57.054 02:14:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:57.054 true 00:12:57.054 02:14:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:57.054 02:14:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.312 02:14:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.571 02:14:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:57.571 02:14:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:57.830 true 00:12:57.830 02:14:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:57.830 02:14:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.767 02:14:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.026 02:14:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:59.026 02:14:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:59.285 true 00:12:59.285 02:14:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:59.285 02:14:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.285 02:14:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.544 02:14:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:59.544 02:14:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:59.806 true 00:12:59.806 02:14:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:12:59.806 02:14:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.742 02:15:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.000 02:15:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:01.000 02:15:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:01.259 true 00:13:01.259 02:15:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:13:01.259 02:15:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.517 02:15:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.776 02:15:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:01.776 02:15:01 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:02.035 true 00:13:02.035 02:15:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:13:02.035 02:15:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.293 02:15:01 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.551 02:15:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:02.551 02:15:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:02.808 true 00:13:02.808 02:15:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:13:02.808 02:15:02 -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.757 02:15:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.015 02:15:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:04.015 02:15:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:04.272 true 00:13:04.272 02:15:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:13:04.272 02:15:03 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.272 Initializing NVMe Controllers 00:13:04.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:04.272 Controller IO queue size 128, less than required. 00:13:04.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:04.272 Controller IO queue size 128, less than required. 00:13:04.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:04.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:04.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:04.272 Initialization complete. Launching workers. 00:13:04.272 ======================================================== 00:13:04.272 Latency(us) 00:13:04.273 Device Information : IOPS MiB/s Average min max 00:13:04.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 359.37 0.18 192814.82 3028.67 1160591.61 00:13:04.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12766.24 6.23 10027.39 1742.46 560933.99 00:13:04.273 ======================================================== 00:13:04.273 Total : 13125.60 6.41 15031.96 1742.46 1160591.61 00:13:04.273 00:13:04.530 02:15:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.786 02:15:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:04.786 02:15:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:04.786 true 00:13:04.786 02:15:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 78671 00:13:04.786 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (78671) - No such process 00:13:04.786 02:15:04 -- target/ns_hotplug_stress.sh@53 -- # wait 78671 00:13:04.786 02:15:04 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.043 02:15:04 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.300 02:15:04 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:05.300 02:15:04 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:05.300 02:15:04 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:05.300 02:15:04 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.300 02:15:04 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:05.558 null0 00:13:05.558 02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:05.558 
02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.558 02:15:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:05.815 null1 00:13:05.815 02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:05.815 02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:05.815 02:15:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:06.072 null2 00:13:06.072 02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.072 02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.072 02:15:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:06.341 null3 00:13:06.341 02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.341 02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.341 02:15:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:06.599 null4 00:13:06.599 02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.599 02:15:05 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.599 02:15:05 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:06.599 null5 00:13:06.599 02:15:06 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.599 02:15:06 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.599 02:15:06 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:06.856 null6 00:13:06.856 02:15:06 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:06.856 02:15:06 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:06.856 02:15:06 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:07.114 null7 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@66 -- # wait 79714 79715 79718 79720 79721 79722 79724 79728 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.114 02:15:06 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.372 02:15:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.372 02:15:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.372 02:15:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.372 02:15:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.372 02:15:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.372 02:15:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.372 02:15:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:07.629 02:15:06 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.629 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.630 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.887 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.145 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:08.402 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.402 02:15:07 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.402 02:15:07 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:08.402 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:08.402 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:08.402 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.402 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:08.402 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.402 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:08.659 02:15:07 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.659 02:15:08 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.659 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:08.916 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.174 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:09.431 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:09.689 02:15:08 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.689 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:09.690 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.690 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.690 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.948 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.206 02:15:09 -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.206 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.464 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.465 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.465 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.465 02:15:09 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.465 02:15:09 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.465 02:15:09 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.465 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.723 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:10.983 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.242 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:11.500 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.500 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.500 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.500 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.500 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:11.501 02:15:10 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:11.501 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.758 02:15:11 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:11.758 02:15:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:12.016 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.274 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:12.533 02:15:11 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:12.533 02:15:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:12.533 02:15:11 -- nvmf/common.sh@116 -- # sync 00:13:12.533 02:15:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:12.533 02:15:11 -- nvmf/common.sh@119 -- # set +e 00:13:12.533 02:15:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:12.533 02:15:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:12.533 rmmod nvme_tcp 00:13:12.533 rmmod nvme_fabrics 00:13:12.533 rmmod nvme_keyring 00:13:12.533 02:15:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:12.533 02:15:12 -- nvmf/common.sh@123 -- # set -e 00:13:12.533 02:15:12 -- nvmf/common.sh@124 -- # return 0 00:13:12.533 02:15:12 -- nvmf/common.sh@477 -- # '[' -n 78540 ']' 00:13:12.533 02:15:12 -- nvmf/common.sh@478 -- # killprocess 78540 00:13:12.533 02:15:12 -- common/autotest_common.sh@926 -- # '[' -z 78540 ']' 00:13:12.533 02:15:12 -- common/autotest_common.sh@930 -- # kill -0 78540 00:13:12.533 02:15:12 -- common/autotest_common.sh@931 -- # uname 00:13:12.533 02:15:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:12.533 02:15:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78540 00:13:12.533 02:15:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:12.533 02:15:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:12.533 killing process with pid 78540 00:13:12.533 02:15:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78540' 00:13:12.533 02:15:12 -- common/autotest_common.sh@945 -- # kill 78540 00:13:12.533 02:15:12 -- common/autotest_common.sh@950 -- # wait 78540 00:13:12.791 02:15:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:12.791 02:15:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:12.791 02:15:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:12.791 02:15:12 -- nvmf/common.sh@273 -- # 
[[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.791 02:15:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:12.791 02:15:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.791 02:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.791 02:15:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.791 02:15:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:12.791 00:13:12.791 real 0m42.436s 00:13:12.791 user 3m22.478s 00:13:12.791 sys 0m12.463s 00:13:12.791 02:15:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.791 02:15:12 -- common/autotest_common.sh@10 -- # set +x 00:13:12.791 ************************************ 00:13:12.791 END TEST nvmf_ns_hotplug_stress 00:13:12.791 ************************************ 00:13:12.791 02:15:12 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:12.791 02:15:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:12.791 02:15:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:12.791 02:15:12 -- common/autotest_common.sh@10 -- # set +x 00:13:12.791 ************************************ 00:13:12.791 START TEST nvmf_connect_stress 00:13:12.792 ************************************ 00:13:12.792 02:15:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:13.062 * Looking for test storage... 00:13:13.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:13.062 02:15:12 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:13.062 02:15:12 -- nvmf/common.sh@7 -- # uname -s 00:13:13.062 02:15:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.062 02:15:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.062 02:15:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.062 02:15:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.062 02:15:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.062 02:15:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.062 02:15:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.062 02:15:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.062 02:15:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.062 02:15:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.062 02:15:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:13.062 02:15:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:13.062 02:15:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.062 02:15:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.062 02:15:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:13.062 02:15:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:13.062 02:15:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.062 02:15:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.062 02:15:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.062 02:15:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.062 02:15:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.062 02:15:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.062 02:15:12 -- paths/export.sh@5 -- # export PATH 00:13:13.062 02:15:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.062 02:15:12 -- nvmf/common.sh@46 -- # : 0 00:13:13.062 02:15:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:13.062 02:15:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:13.062 02:15:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:13.062 02:15:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.063 02:15:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.063 02:15:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:13.063 02:15:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:13.063 02:15:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:13.063 02:15:12 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:13.063 02:15:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:13.063 02:15:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.063 02:15:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:13.063 02:15:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:13.063 02:15:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:13.063 02:15:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.063 02:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.063 02:15:12 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.063 02:15:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:13.063 02:15:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:13.063 02:15:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:13.063 02:15:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:13.063 02:15:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:13.063 02:15:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:13.063 02:15:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.063 02:15:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.063 02:15:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:13.063 02:15:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:13.063 02:15:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:13.063 02:15:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:13.063 02:15:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:13.063 02:15:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.063 02:15:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:13.063 02:15:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:13.063 02:15:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:13.063 02:15:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:13.063 02:15:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:13.063 02:15:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:13.063 Cannot find device "nvmf_tgt_br" 00:13:13.063 02:15:12 -- nvmf/common.sh@154 -- # true 00:13:13.063 02:15:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.063 Cannot find device "nvmf_tgt_br2" 00:13:13.063 02:15:12 -- nvmf/common.sh@155 -- # true 00:13:13.063 02:15:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:13.063 02:15:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:13.063 Cannot find device "nvmf_tgt_br" 00:13:13.063 02:15:12 -- nvmf/common.sh@157 -- # true 00:13:13.063 02:15:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:13.063 Cannot find device "nvmf_tgt_br2" 00:13:13.063 02:15:12 -- nvmf/common.sh@158 -- # true 00:13:13.063 02:15:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:13.063 02:15:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:13.063 02:15:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.063 02:15:12 -- nvmf/common.sh@161 -- # true 00:13:13.063 02:15:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.063 02:15:12 -- nvmf/common.sh@162 -- # true 00:13:13.063 02:15:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:13.063 02:15:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:13.063 02:15:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:13.063 02:15:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:13.063 02:15:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:13.331 02:15:12 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:13.331 02:15:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:13.331 02:15:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:13.331 02:15:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:13.331 02:15:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:13.331 02:15:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:13.331 02:15:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:13.331 02:15:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:13.331 02:15:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:13.331 02:15:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:13.331 02:15:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:13.331 02:15:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:13.331 02:15:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:13.331 02:15:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:13.331 02:15:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:13.331 02:15:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:13.331 02:15:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:13.331 02:15:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:13.331 02:15:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:13.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:13.331 00:13:13.331 --- 10.0.0.2 ping statistics --- 00:13:13.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.331 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:13.331 02:15:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:13.331 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:13.331 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:13.331 00:13:13.331 --- 10.0.0.3 ping statistics --- 00:13:13.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.331 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:13.331 02:15:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:13.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:13.332 00:13:13.332 --- 10.0.0.1 ping statistics --- 00:13:13.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.332 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:13.332 02:15:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.332 02:15:12 -- nvmf/common.sh@421 -- # return 0 00:13:13.332 02:15:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:13.332 02:15:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.332 02:15:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:13.332 02:15:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:13.332 02:15:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.332 02:15:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:13.332 02:15:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:13.332 02:15:12 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:13.332 02:15:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:13.332 02:15:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:13.332 02:15:12 -- common/autotest_common.sh@10 -- # set +x 00:13:13.332 02:15:12 -- nvmf/common.sh@469 -- # nvmfpid=81032 00:13:13.332 02:15:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:13.332 02:15:12 -- nvmf/common.sh@470 -- # waitforlisten 81032 00:13:13.332 02:15:12 -- common/autotest_common.sh@819 -- # '[' -z 81032 ']' 00:13:13.332 02:15:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.332 02:15:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:13.332 02:15:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.332 02:15:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:13.332 02:15:12 -- common/autotest_common.sh@10 -- # set +x 00:13:13.332 [2024-07-15 02:15:12.817301] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:13.332 [2024-07-15 02:15:12.817416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.590 [2024-07-15 02:15:12.959791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.590 [2024-07-15 02:15:13.038437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:13.590 [2024-07-15 02:15:13.038631] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.590 [2024-07-15 02:15:13.038648] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.590 [2024-07-15 02:15:13.038659] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:13.590 [2024-07-15 02:15:13.038854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.590 [2024-07-15 02:15:13.039571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.590 [2024-07-15 02:15:13.039498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.525 02:15:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:14.525 02:15:13 -- common/autotest_common.sh@852 -- # return 0 00:13:14.525 02:15:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:14.525 02:15:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:14.525 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:14.525 02:15:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.525 02:15:13 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.525 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.525 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:14.525 [2024-07-15 02:15:13.791634] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.525 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.525 02:15:13 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:14.525 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.525 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:14.525 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.525 02:15:13 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.525 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.525 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:14.525 [2024-07-15 02:15:13.811808] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.525 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.525 02:15:13 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:14.525 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.525 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:14.525 NULL1 00:13:14.525 02:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.525 02:15:13 -- target/connect_stress.sh@21 -- # PERF_PID=81084 00:13:14.525 02:15:13 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:14.525 02:15:13 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:14.525 02:15:13 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- 
target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:14.525 02:15:13 -- target/connect_stress.sh@28 -- # cat 00:13:14.525 02:15:13 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:14.525 02:15:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.525 02:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.525 02:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:14.784 02:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.784 02:15:14 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:14.784 02:15:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.784 02:15:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.784 02:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:15.043 02:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.043 02:15:14 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:15.043 02:15:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.043 02:15:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.043 02:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:15.611 02:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.611 02:15:14 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:15.611 02:15:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.611 02:15:14 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:13:15.611 02:15:14 -- common/autotest_common.sh@10 -- # set +x 00:13:15.870 02:15:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.870 02:15:15 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:15.870 02:15:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.870 02:15:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.870 02:15:15 -- common/autotest_common.sh@10 -- # set +x 00:13:16.129 02:15:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.129 02:15:15 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:16.129 02:15:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.129 02:15:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.129 02:15:15 -- common/autotest_common.sh@10 -- # set +x 00:13:16.388 02:15:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.388 02:15:15 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:16.388 02:15:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.388 02:15:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.388 02:15:15 -- common/autotest_common.sh@10 -- # set +x 00:13:16.648 02:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.648 02:15:16 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:16.648 02:15:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.648 02:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.648 02:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:17.215 02:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.215 02:15:16 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:17.215 02:15:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.215 02:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.215 02:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:17.472 02:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.472 02:15:16 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:17.472 02:15:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.472 02:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.472 02:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:17.729 02:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.729 02:15:17 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:17.729 02:15:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.729 02:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.729 02:15:17 -- common/autotest_common.sh@10 -- # set +x 00:13:17.986 02:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.986 02:15:17 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:17.986 02:15:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.986 02:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.986 02:15:17 -- common/autotest_common.sh@10 -- # set +x 00:13:18.244 02:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.244 02:15:17 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:18.244 02:15:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.244 02:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.244 02:15:17 -- common/autotest_common.sh@10 -- # set +x 00:13:18.809 02:15:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.809 02:15:18 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:18.809 02:15:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.809 02:15:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.809 
02:15:18 -- common/autotest_common.sh@10 -- # set +x 00:13:19.142 02:15:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.142 02:15:18 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:19.142 02:15:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.142 02:15:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.142 02:15:18 -- common/autotest_common.sh@10 -- # set +x 00:13:19.400 02:15:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.400 02:15:18 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:19.400 02:15:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.400 02:15:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.400 02:15:18 -- common/autotest_common.sh@10 -- # set +x 00:13:19.658 02:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.658 02:15:19 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:19.658 02:15:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.658 02:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.658 02:15:19 -- common/autotest_common.sh@10 -- # set +x 00:13:19.916 02:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.916 02:15:19 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:19.916 02:15:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.916 02:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.916 02:15:19 -- common/autotest_common.sh@10 -- # set +x 00:13:20.174 02:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.174 02:15:19 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:20.174 02:15:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.174 02:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.174 02:15:19 -- common/autotest_common.sh@10 -- # set +x 00:13:20.739 02:15:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.739 02:15:20 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:20.739 02:15:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.739 02:15:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.739 02:15:20 -- common/autotest_common.sh@10 -- # set +x 00:13:20.997 02:15:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.997 02:15:20 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:20.997 02:15:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.997 02:15:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.997 02:15:20 -- common/autotest_common.sh@10 -- # set +x 00:13:21.256 02:15:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.256 02:15:20 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:21.256 02:15:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.256 02:15:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.256 02:15:20 -- common/autotest_common.sh@10 -- # set +x 00:13:21.515 02:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.515 02:15:21 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:21.515 02:15:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.515 02:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.515 02:15:21 -- common/autotest_common.sh@10 -- # set +x 00:13:21.774 02:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.774 02:15:21 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:21.774 02:15:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.774 02:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.774 02:15:21 -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.341 02:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.341 02:15:21 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:22.341 02:15:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.341 02:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.341 02:15:21 -- common/autotest_common.sh@10 -- # set +x 00:13:22.599 02:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.599 02:15:21 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:22.599 02:15:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.599 02:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.599 02:15:21 -- common/autotest_common.sh@10 -- # set +x 00:13:22.858 02:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.858 02:15:22 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:22.858 02:15:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.858 02:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.858 02:15:22 -- common/autotest_common.sh@10 -- # set +x 00:13:23.116 02:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.116 02:15:22 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:23.116 02:15:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.116 02:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.116 02:15:22 -- common/autotest_common.sh@10 -- # set +x 00:13:23.684 02:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.684 02:15:22 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:23.684 02:15:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.684 02:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.684 02:15:22 -- common/autotest_common.sh@10 -- # set +x 00:13:23.942 02:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.942 02:15:23 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:23.942 02:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.942 02:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.942 02:15:23 -- common/autotest_common.sh@10 -- # set +x 00:13:24.199 02:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.199 02:15:23 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:24.199 02:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.199 02:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.199 02:15:23 -- common/autotest_common.sh@10 -- # set +x 00:13:24.457 02:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.457 02:15:23 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:24.457 02:15:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.457 02:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.457 02:15:23 -- common/autotest_common.sh@10 -- # set +x 00:13:24.457 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:24.715 02:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.715 02:15:24 -- target/connect_stress.sh@34 -- # kill -0 81084 00:13:24.715 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81084) - No such process 00:13:24.715 02:15:24 -- target/connect_stress.sh@38 -- # wait 81084 00:13:24.715 02:15:24 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:24.715 02:15:24 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:24.715 02:15:24 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:24.715 02:15:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:24.715 02:15:24 -- nvmf/common.sh@116 -- # sync 00:13:24.974 02:15:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:24.974 02:15:24 -- nvmf/common.sh@119 -- # set +e 00:13:24.974 02:15:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:24.974 02:15:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:24.974 rmmod nvme_tcp 00:13:24.974 rmmod nvme_fabrics 00:13:24.974 rmmod nvme_keyring 00:13:24.974 02:15:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:24.974 02:15:24 -- nvmf/common.sh@123 -- # set -e 00:13:24.974 02:15:24 -- nvmf/common.sh@124 -- # return 0 00:13:24.974 02:15:24 -- nvmf/common.sh@477 -- # '[' -n 81032 ']' 00:13:24.974 02:15:24 -- nvmf/common.sh@478 -- # killprocess 81032 00:13:24.974 02:15:24 -- common/autotest_common.sh@926 -- # '[' -z 81032 ']' 00:13:24.974 02:15:24 -- common/autotest_common.sh@930 -- # kill -0 81032 00:13:24.974 02:15:24 -- common/autotest_common.sh@931 -- # uname 00:13:24.974 02:15:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:24.974 02:15:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81032 00:13:24.974 killing process with pid 81032 00:13:24.974 02:15:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:24.974 02:15:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:24.974 02:15:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81032' 00:13:24.974 02:15:24 -- common/autotest_common.sh@945 -- # kill 81032 00:13:24.974 02:15:24 -- common/autotest_common.sh@950 -- # wait 81032 00:13:25.232 02:15:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:25.232 02:15:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:25.232 02:15:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:25.232 02:15:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.232 02:15:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:25.232 02:15:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.232 02:15:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.232 02:15:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.232 02:15:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:25.232 ************************************ 00:13:25.232 END TEST nvmf_connect_stress 00:13:25.232 ************************************ 00:13:25.232 00:13:25.232 real 0m12.265s 00:13:25.232 user 0m41.325s 00:13:25.232 sys 0m2.910s 00:13:25.232 02:15:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.232 02:15:24 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 02:15:24 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:25.232 02:15:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:25.232 02:15:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.232 02:15:24 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 ************************************ 00:13:25.232 START TEST nvmf_fused_ordering 00:13:25.232 ************************************ 00:13:25.232 02:15:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:25.232 * Looking for test storage... 
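Before the fused_ordering output begins in earnest, it is worth unpacking the connect_stress phase that just ended: the stressor ran against nqn.2016-06.io.spdk:cnode1 for ten seconds (-t 10) while the harness looped, using kill -0 $PERF_PID purely as a liveness probe (it delivers no signal) and replaying a batch of RPCs at the target on every pass (the `seq 1 20` / `cat` trace builds that rpc.txt batch), until kill reported "No such process". A rough control-flow sketch, with rpc_get_methods standing in for the batched calls, which is an assumption; the pacing value is also assumed:

    #!/usr/bin/env bash
    # Control-flow sketch of the connect_stress phase in the trace above.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do              # liveness check only
        "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null  # keep the target busy with RPC traffic
        sleep 0.5
    done
    wait "$PERF_PID"   # reap the stressor; the EXIT trap then runs nvmftestfini

The teardown visible above is nvmftestfini: modprobe -v -r unloads nvme-tcp and nvme-fabrics (the rmmod lines), killprocess stops the reactor process (81032, reactor_1), and nvmf_tcp_fini removes the namespace and flushes 10.0.0.1 off nvmf_init_if so the next test can rebuild the same topology from scratch.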
00:13:25.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.233 02:15:24 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.233 02:15:24 -- nvmf/common.sh@7 -- # uname -s 00:13:25.233 02:15:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.233 02:15:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.233 02:15:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.233 02:15:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.233 02:15:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.233 02:15:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.233 02:15:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.233 02:15:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.233 02:15:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.233 02:15:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.233 02:15:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:25.233 02:15:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:25.233 02:15:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.233 02:15:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.233 02:15:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.233 02:15:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.233 02:15:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.233 02:15:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.233 02:15:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.233 02:15:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.233 02:15:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.233 02:15:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.233 02:15:24 -- 
paths/export.sh@5 -- # export PATH 00:13:25.233 02:15:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.233 02:15:24 -- nvmf/common.sh@46 -- # : 0 00:13:25.233 02:15:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:25.233 02:15:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:25.233 02:15:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:25.233 02:15:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.233 02:15:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.233 02:15:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:25.233 02:15:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:25.233 02:15:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:25.233 02:15:24 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:25.233 02:15:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:25.233 02:15:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.233 02:15:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:25.233 02:15:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:25.233 02:15:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:25.233 02:15:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.233 02:15:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.233 02:15:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.233 02:15:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:25.233 02:15:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:25.233 02:15:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:25.233 02:15:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:25.233 02:15:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:25.233 02:15:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:25.233 02:15:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.233 02:15:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.233 02:15:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:25.233 02:15:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:25.233 02:15:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:25.233 02:15:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:25.233 02:15:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:25.233 02:15:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.233 02:15:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:25.233 02:15:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:25.233 02:15:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:25.233 02:15:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:25.233 02:15:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:25.233 02:15:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:25.233 Cannot find device "nvmf_tgt_br" 00:13:25.233 
02:15:24 -- nvmf/common.sh@154 -- # true 00:13:25.233 02:15:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:25.491 Cannot find device "nvmf_tgt_br2" 00:13:25.491 02:15:24 -- nvmf/common.sh@155 -- # true 00:13:25.491 02:15:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:25.491 02:15:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:25.491 Cannot find device "nvmf_tgt_br" 00:13:25.491 02:15:24 -- nvmf/common.sh@157 -- # true 00:13:25.491 02:15:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:25.491 Cannot find device "nvmf_tgt_br2" 00:13:25.491 02:15:24 -- nvmf/common.sh@158 -- # true 00:13:25.491 02:15:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:25.491 02:15:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:25.491 02:15:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:25.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.491 02:15:24 -- nvmf/common.sh@161 -- # true 00:13:25.491 02:15:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:25.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.491 02:15:24 -- nvmf/common.sh@162 -- # true 00:13:25.491 02:15:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:25.491 02:15:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:25.491 02:15:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:25.491 02:15:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:25.491 02:15:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:25.491 02:15:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:25.491 02:15:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:25.491 02:15:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:25.491 02:15:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:25.491 02:15:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:25.491 02:15:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:25.491 02:15:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:25.491 02:15:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:25.491 02:15:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:25.491 02:15:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:25.491 02:15:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:25.491 02:15:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:25.491 02:15:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:25.491 02:15:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:25.491 02:15:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:25.491 02:15:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:25.491 02:15:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:25.491 02:15:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:25.491 02:15:25 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:25.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:25.491 00:13:25.491 --- 10.0.0.2 ping statistics --- 00:13:25.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.491 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:25.491 02:15:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:25.491 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:25.491 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:25.491 00:13:25.491 --- 10.0.0.3 ping statistics --- 00:13:25.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.491 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:25.491 02:15:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:25.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:25.491 00:13:25.491 --- 10.0.0.1 ping statistics --- 00:13:25.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.491 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:25.491 02:15:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.491 02:15:25 -- nvmf/common.sh@421 -- # return 0 00:13:25.491 02:15:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:25.491 02:15:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.491 02:15:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:25.491 02:15:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:25.491 02:15:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.491 02:15:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:25.491 02:15:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:25.748 02:15:25 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:25.748 02:15:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:25.748 02:15:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:25.748 02:15:25 -- common/autotest_common.sh@10 -- # set +x 00:13:25.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.748 02:15:25 -- nvmf/common.sh@469 -- # nvmfpid=81407 00:13:25.748 02:15:25 -- nvmf/common.sh@470 -- # waitforlisten 81407 00:13:25.748 02:15:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:25.748 02:15:25 -- common/autotest_common.sh@819 -- # '[' -z 81407 ']' 00:13:25.748 02:15:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.748 02:15:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:25.748 02:15:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.748 02:15:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:25.748 02:15:25 -- common/autotest_common.sh@10 -- # set +x 00:13:25.748 [2024-07-15 02:15:25.114465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
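The startup pattern here repeats the earlier test: nvmfappstart launches nvmf_tgt inside the namespace (now with core mask 0x2), waitforlisten polls /var/tmp/spdk.sock, and a short rpc_cmd sequence provisions the subsystem. Spelled out as direct rpc.py calls, the provisioning in the next stretch of the trace amounts to the following sketch (the until-loop is a crude stand-in for waitforlisten):

    #!/usr/bin/env bash
    # The fused_ordering provisioning steps from the trace, as plain rpc.py calls.
    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/scripts/rpc.py" bdev_null_create NULL1 1000 512   # backs the "size: 1GB" namespace below
    "$SPDK/scripts/rpc.py" bdev_wait_for_examine
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Once the fused_ordering binary attaches to the subsystem, the numbered fused_ordering(N) lines that dominate the rest of the section are its per-iteration progress output.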
00:13:25.748 [2024-07-15 02:15:25.114554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.748 [2024-07-15 02:15:25.251638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.005 [2024-07-15 02:15:25.333333] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:26.005 [2024-07-15 02:15:25.333485] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.005 [2024-07-15 02:15:25.333547] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.005 [2024-07-15 02:15:25.333572] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.005 [2024-07-15 02:15:25.333598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.569 02:15:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:26.569 02:15:26 -- common/autotest_common.sh@852 -- # return 0 00:13:26.569 02:15:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:26.569 02:15:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:26.569 02:15:26 -- common/autotest_common.sh@10 -- # set +x 00:13:26.569 02:15:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.569 02:15:26 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.569 02:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.569 02:15:26 -- common/autotest_common.sh@10 -- # set +x 00:13:26.569 [2024-07-15 02:15:26.099939] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.569 02:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.569 02:15:26 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.569 02:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.569 02:15:26 -- common/autotest_common.sh@10 -- # set +x 00:13:26.569 02:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.569 02:15:26 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.569 02:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.569 02:15:26 -- common/autotest_common.sh@10 -- # set +x 00:13:26.569 [2024-07-15 02:15:26.116079] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.569 02:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.569 02:15:26 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.569 02:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.569 02:15:26 -- common/autotest_common.sh@10 -- # set +x 00:13:26.826 NULL1 00:13:26.826 02:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.826 02:15:26 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:26.826 02:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.826 02:15:26 -- common/autotest_common.sh@10 -- # set +x 00:13:26.826 02:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.826 02:15:26 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
NULL1 00:13:26.826 02:15:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.826 02:15:26 -- common/autotest_common.sh@10 -- # set +x 00:13:26.826 02:15:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.826 02:15:26 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.826 [2024-07-15 02:15:26.165497] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:26.826 [2024-07-15 02:15:26.165549] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81457 ] 00:13:27.083 Attached to nqn.2016-06.io.spdk:cnode1 00:13:27.083 Namespace ID: 1 size: 1GB 00:13:27.083 fused_ordering(0) 00:13:27.083 fused_ordering(1) 00:13:27.083 fused_ordering(2) 00:13:27.083 fused_ordering(3) 00:13:27.083 fused_ordering(4) 00:13:27.083 fused_ordering(5) 00:13:27.083 fused_ordering(6) 00:13:27.083 fused_ordering(7) 00:13:27.083 fused_ordering(8) 00:13:27.083 fused_ordering(9) 00:13:27.083 fused_ordering(10) 00:13:27.083 fused_ordering(11) 00:13:27.083 fused_ordering(12) 00:13:27.083 fused_ordering(13) 00:13:27.083 fused_ordering(14) 00:13:27.083 fused_ordering(15) 00:13:27.083 fused_ordering(16) 00:13:27.083 fused_ordering(17) 00:13:27.083 fused_ordering(18) 00:13:27.083 fused_ordering(19) 00:13:27.083 fused_ordering(20) 00:13:27.083 fused_ordering(21) 00:13:27.083 fused_ordering(22) 00:13:27.083 fused_ordering(23) 00:13:27.083 fused_ordering(24) 00:13:27.083 fused_ordering(25) 00:13:27.083 fused_ordering(26) 00:13:27.083 fused_ordering(27) 00:13:27.083 fused_ordering(28) 00:13:27.083 fused_ordering(29) 00:13:27.083 fused_ordering(30) 00:13:27.083 fused_ordering(31) 00:13:27.083 fused_ordering(32) 00:13:27.083 fused_ordering(33) 00:13:27.083 fused_ordering(34) 00:13:27.083 fused_ordering(35) 00:13:27.083 fused_ordering(36) 00:13:27.083 fused_ordering(37) 00:13:27.083 fused_ordering(38) 00:13:27.083 fused_ordering(39) 00:13:27.083 fused_ordering(40) 00:13:27.083 fused_ordering(41) 00:13:27.083 fused_ordering(42) 00:13:27.083 fused_ordering(43) 00:13:27.083 fused_ordering(44) 00:13:27.083 fused_ordering(45) 00:13:27.083 fused_ordering(46) 00:13:27.083 fused_ordering(47) 00:13:27.083 fused_ordering(48) 00:13:27.083 fused_ordering(49) 00:13:27.083 fused_ordering(50) 00:13:27.083 fused_ordering(51) 00:13:27.083 fused_ordering(52) 00:13:27.083 fused_ordering(53) 00:13:27.083 fused_ordering(54) 00:13:27.083 fused_ordering(55) 00:13:27.083 fused_ordering(56) 00:13:27.083 fused_ordering(57) 00:13:27.083 fused_ordering(58) 00:13:27.083 fused_ordering(59) 00:13:27.083 fused_ordering(60) 00:13:27.083 fused_ordering(61) 00:13:27.083 fused_ordering(62) 00:13:27.083 fused_ordering(63) 00:13:27.083 fused_ordering(64) 00:13:27.083 fused_ordering(65) 00:13:27.083 fused_ordering(66) 00:13:27.083 fused_ordering(67) 00:13:27.083 fused_ordering(68) 00:13:27.083 fused_ordering(69) 00:13:27.083 fused_ordering(70) 00:13:27.083 fused_ordering(71) 00:13:27.083 fused_ordering(72) 00:13:27.083 fused_ordering(73) 00:13:27.083 fused_ordering(74) 00:13:27.083 fused_ordering(75) 00:13:27.083 fused_ordering(76) 00:13:27.083 fused_ordering(77) 00:13:27.083 fused_ordering(78) 00:13:27.083 fused_ordering(79) 00:13:27.083 fused_ordering(80) 00:13:27.083 
fused_ordering(81) 00:13:27.083 fused_ordering(82) 00:13:27.083 fused_ordering(83) 00:13:27.083 fused_ordering(84) 00:13:27.083 fused_ordering(85) 00:13:27.083 fused_ordering(86) 00:13:27.083 fused_ordering(87) 00:13:27.083 fused_ordering(88) 00:13:27.083 fused_ordering(89) 00:13:27.083 fused_ordering(90) 00:13:27.083 fused_ordering(91) 00:13:27.083 fused_ordering(92) 00:13:27.083 fused_ordering(93) 00:13:27.083 fused_ordering(94) 00:13:27.083 fused_ordering(95) 00:13:27.083 fused_ordering(96) 00:13:27.083 fused_ordering(97) 00:13:27.083 fused_ordering(98) 00:13:27.083 fused_ordering(99) 00:13:27.083 fused_ordering(100) 00:13:27.083 fused_ordering(101) 00:13:27.083 fused_ordering(102) 00:13:27.083 fused_ordering(103) 00:13:27.083 fused_ordering(104) 00:13:27.083 fused_ordering(105) 00:13:27.083 fused_ordering(106) 00:13:27.083 fused_ordering(107) 00:13:27.083 fused_ordering(108) 00:13:27.083 fused_ordering(109) 00:13:27.083 fused_ordering(110) 00:13:27.083 fused_ordering(111) 00:13:27.083 fused_ordering(112) 00:13:27.083 fused_ordering(113) 00:13:27.083 fused_ordering(114) 00:13:27.083 fused_ordering(115) 00:13:27.083 fused_ordering(116) 00:13:27.083 fused_ordering(117) 00:13:27.083 fused_ordering(118) 00:13:27.083 fused_ordering(119) 00:13:27.083 fused_ordering(120) 00:13:27.083 fused_ordering(121) 00:13:27.083 fused_ordering(122) 00:13:27.083 fused_ordering(123) 00:13:27.083 fused_ordering(124) 00:13:27.083 fused_ordering(125) 00:13:27.083 fused_ordering(126) 00:13:27.083 fused_ordering(127) 00:13:27.083 fused_ordering(128) 00:13:27.083 fused_ordering(129) 00:13:27.083 fused_ordering(130) 00:13:27.083 fused_ordering(131) 00:13:27.083 fused_ordering(132) 00:13:27.083 fused_ordering(133) 00:13:27.083 fused_ordering(134) 00:13:27.083 fused_ordering(135) 00:13:27.083 fused_ordering(136) 00:13:27.083 fused_ordering(137) 00:13:27.083 fused_ordering(138) 00:13:27.083 fused_ordering(139) 00:13:27.083 fused_ordering(140) 00:13:27.083 fused_ordering(141) 00:13:27.083 fused_ordering(142) 00:13:27.083 fused_ordering(143) 00:13:27.083 fused_ordering(144) 00:13:27.083 fused_ordering(145) 00:13:27.083 fused_ordering(146) 00:13:27.083 fused_ordering(147) 00:13:27.083 fused_ordering(148) 00:13:27.083 fused_ordering(149) 00:13:27.083 fused_ordering(150) 00:13:27.083 fused_ordering(151) 00:13:27.083 fused_ordering(152) 00:13:27.083 fused_ordering(153) 00:13:27.083 fused_ordering(154) 00:13:27.083 fused_ordering(155) 00:13:27.083 fused_ordering(156) 00:13:27.083 fused_ordering(157) 00:13:27.083 fused_ordering(158) 00:13:27.083 fused_ordering(159) 00:13:27.083 fused_ordering(160) 00:13:27.084 fused_ordering(161) 00:13:27.084 fused_ordering(162) 00:13:27.084 fused_ordering(163) 00:13:27.084 fused_ordering(164) 00:13:27.084 fused_ordering(165) 00:13:27.084 fused_ordering(166) 00:13:27.084 fused_ordering(167) 00:13:27.084 fused_ordering(168) 00:13:27.084 fused_ordering(169) 00:13:27.084 fused_ordering(170) 00:13:27.084 fused_ordering(171) 00:13:27.084 fused_ordering(172) 00:13:27.084 fused_ordering(173) 00:13:27.084 fused_ordering(174) 00:13:27.084 fused_ordering(175) 00:13:27.084 fused_ordering(176) 00:13:27.084 fused_ordering(177) 00:13:27.084 fused_ordering(178) 00:13:27.084 fused_ordering(179) 00:13:27.084 fused_ordering(180) 00:13:27.084 fused_ordering(181) 00:13:27.084 fused_ordering(182) 00:13:27.084 fused_ordering(183) 00:13:27.084 fused_ordering(184) 00:13:27.084 fused_ordering(185) 00:13:27.084 fused_ordering(186) 00:13:27.084 fused_ordering(187) 00:13:27.084 fused_ordering(188) 00:13:27.084 
fused_ordering(189) 00:13:27.084 fused_ordering(190) 00:13:27.084 fused_ordering(191) 00:13:27.084 fused_ordering(192) 00:13:27.084 fused_ordering(193) 00:13:27.084 fused_ordering(194) 00:13:27.084 fused_ordering(195) 00:13:27.084 fused_ordering(196) 00:13:27.084 fused_ordering(197) 00:13:27.084 fused_ordering(198) 00:13:27.084 fused_ordering(199) 00:13:27.084 fused_ordering(200) 00:13:27.084 fused_ordering(201) 00:13:27.084 fused_ordering(202) 00:13:27.084 fused_ordering(203) 00:13:27.084 fused_ordering(204) 00:13:27.084 fused_ordering(205) 00:13:27.648 fused_ordering(206) 00:13:27.648 fused_ordering(207) 00:13:27.648 fused_ordering(208) 00:13:27.648 fused_ordering(209) 00:13:27.648 fused_ordering(210) 00:13:27.648 fused_ordering(211) 00:13:27.648 fused_ordering(212) 00:13:27.648 fused_ordering(213) 00:13:27.648 fused_ordering(214) 00:13:27.648 fused_ordering(215) 00:13:27.648 fused_ordering(216) 00:13:27.648 fused_ordering(217) 00:13:27.648 fused_ordering(218) 00:13:27.648 fused_ordering(219) 00:13:27.648 fused_ordering(220) 00:13:27.648 fused_ordering(221) 00:13:27.648 fused_ordering(222) 00:13:27.648 fused_ordering(223) 00:13:27.648 fused_ordering(224) 00:13:27.648 fused_ordering(225) 00:13:27.648 fused_ordering(226) 00:13:27.648 fused_ordering(227) 00:13:27.648 fused_ordering(228) 00:13:27.648 fused_ordering(229) 00:13:27.648 fused_ordering(230) 00:13:27.648 fused_ordering(231) 00:13:27.648 fused_ordering(232) 00:13:27.648 fused_ordering(233) 00:13:27.648 fused_ordering(234) 00:13:27.648 fused_ordering(235) 00:13:27.648 fused_ordering(236) 00:13:27.648 fused_ordering(237) 00:13:27.648 fused_ordering(238) 00:13:27.648 fused_ordering(239) 00:13:27.648 fused_ordering(240) 00:13:27.648 fused_ordering(241) 00:13:27.648 fused_ordering(242) 00:13:27.648 fused_ordering(243) 00:13:27.648 fused_ordering(244) 00:13:27.648 fused_ordering(245) 00:13:27.648 fused_ordering(246) 00:13:27.648 fused_ordering(247) 00:13:27.648 fused_ordering(248) 00:13:27.648 fused_ordering(249) 00:13:27.648 fused_ordering(250) 00:13:27.648 fused_ordering(251) 00:13:27.648 fused_ordering(252) 00:13:27.648 fused_ordering(253) 00:13:27.648 fused_ordering(254) 00:13:27.648 fused_ordering(255) 00:13:27.648 fused_ordering(256) 00:13:27.648 fused_ordering(257) 00:13:27.648 fused_ordering(258) 00:13:27.648 fused_ordering(259) 00:13:27.648 fused_ordering(260) 00:13:27.648 fused_ordering(261) 00:13:27.648 fused_ordering(262) 00:13:27.648 fused_ordering(263) 00:13:27.648 fused_ordering(264) 00:13:27.648 fused_ordering(265) 00:13:27.648 fused_ordering(266) 00:13:27.648 fused_ordering(267) 00:13:27.648 fused_ordering(268) 00:13:27.648 fused_ordering(269) 00:13:27.648 fused_ordering(270) 00:13:27.648 fused_ordering(271) 00:13:27.648 fused_ordering(272) 00:13:27.648 fused_ordering(273) 00:13:27.648 fused_ordering(274) 00:13:27.648 fused_ordering(275) 00:13:27.648 fused_ordering(276) 00:13:27.648 fused_ordering(277) 00:13:27.648 fused_ordering(278) 00:13:27.648 fused_ordering(279) 00:13:27.648 fused_ordering(280) 00:13:27.648 fused_ordering(281) 00:13:27.648 fused_ordering(282) 00:13:27.648 fused_ordering(283) 00:13:27.648 fused_ordering(284) 00:13:27.648 fused_ordering(285) 00:13:27.648 fused_ordering(286) 00:13:27.648 fused_ordering(287) 00:13:27.648 fused_ordering(288) 00:13:27.648 fused_ordering(289) 00:13:27.648 fused_ordering(290) 00:13:27.648 fused_ordering(291) 00:13:27.648 fused_ordering(292) 00:13:27.648 fused_ordering(293) 00:13:27.648 fused_ordering(294) 00:13:27.648 fused_ordering(295) 00:13:27.648 fused_ordering(296) 
00:13:27.648 fused_ordering(297) 00:13:27.648 fused_ordering(298) 00:13:27.648 fused_ordering(299) 00:13:27.648 fused_ordering(300) 00:13:27.648 fused_ordering(301) 00:13:27.648 fused_ordering(302) 00:13:27.648 fused_ordering(303) 00:13:27.648 fused_ordering(304) 00:13:27.648 fused_ordering(305) 00:13:27.648 fused_ordering(306) 00:13:27.648 fused_ordering(307) 00:13:27.648 fused_ordering(308) 00:13:27.648 fused_ordering(309) 00:13:27.648 fused_ordering(310) 00:13:27.648 fused_ordering(311) 00:13:27.648 fused_ordering(312) 00:13:27.648 fused_ordering(313) 00:13:27.648 fused_ordering(314) 00:13:27.648 fused_ordering(315) 00:13:27.648 fused_ordering(316) 00:13:27.648 fused_ordering(317) 00:13:27.648 fused_ordering(318) 00:13:27.648 fused_ordering(319) 00:13:27.648 fused_ordering(320) 00:13:27.648 fused_ordering(321) 00:13:27.648 fused_ordering(322) 00:13:27.648 fused_ordering(323) 00:13:27.648 fused_ordering(324) 00:13:27.648 fused_ordering(325) 00:13:27.648 fused_ordering(326) 00:13:27.648 fused_ordering(327) 00:13:27.648 fused_ordering(328) 00:13:27.648 fused_ordering(329) 00:13:27.648 fused_ordering(330) 00:13:27.648 fused_ordering(331) 00:13:27.648 fused_ordering(332) 00:13:27.648 fused_ordering(333) 00:13:27.648 fused_ordering(334) 00:13:27.648 fused_ordering(335) 00:13:27.648 fused_ordering(336) 00:13:27.648 fused_ordering(337) 00:13:27.648 fused_ordering(338) 00:13:27.648 fused_ordering(339) 00:13:27.648 fused_ordering(340) 00:13:27.648 fused_ordering(341) 00:13:27.648 fused_ordering(342) 00:13:27.648 fused_ordering(343) 00:13:27.648 fused_ordering(344) 00:13:27.648 fused_ordering(345) 00:13:27.648 fused_ordering(346) 00:13:27.648 fused_ordering(347) 00:13:27.648 fused_ordering(348) 00:13:27.648 fused_ordering(349) 00:13:27.648 fused_ordering(350) 00:13:27.648 fused_ordering(351) 00:13:27.648 fused_ordering(352) 00:13:27.648 fused_ordering(353) 00:13:27.648 fused_ordering(354) 00:13:27.648 fused_ordering(355) 00:13:27.648 fused_ordering(356) 00:13:27.648 fused_ordering(357) 00:13:27.648 fused_ordering(358) 00:13:27.648 fused_ordering(359) 00:13:27.648 fused_ordering(360) 00:13:27.648 fused_ordering(361) 00:13:27.648 fused_ordering(362) 00:13:27.648 fused_ordering(363) 00:13:27.648 fused_ordering(364) 00:13:27.648 fused_ordering(365) 00:13:27.648 fused_ordering(366) 00:13:27.648 fused_ordering(367) 00:13:27.648 fused_ordering(368) 00:13:27.648 fused_ordering(369) 00:13:27.648 fused_ordering(370) 00:13:27.648 fused_ordering(371) 00:13:27.648 fused_ordering(372) 00:13:27.648 fused_ordering(373) 00:13:27.648 fused_ordering(374) 00:13:27.648 fused_ordering(375) 00:13:27.648 fused_ordering(376) 00:13:27.648 fused_ordering(377) 00:13:27.648 fused_ordering(378) 00:13:27.648 fused_ordering(379) 00:13:27.648 fused_ordering(380) 00:13:27.648 fused_ordering(381) 00:13:27.648 fused_ordering(382) 00:13:27.648 fused_ordering(383) 00:13:27.648 fused_ordering(384) 00:13:27.648 fused_ordering(385) 00:13:27.648 fused_ordering(386) 00:13:27.648 fused_ordering(387) 00:13:27.648 fused_ordering(388) 00:13:27.648 fused_ordering(389) 00:13:27.648 fused_ordering(390) 00:13:27.648 fused_ordering(391) 00:13:27.648 fused_ordering(392) 00:13:27.648 fused_ordering(393) 00:13:27.648 fused_ordering(394) 00:13:27.648 fused_ordering(395) 00:13:27.648 fused_ordering(396) 00:13:27.648 fused_ordering(397) 00:13:27.648 fused_ordering(398) 00:13:27.648 fused_ordering(399) 00:13:27.648 fused_ordering(400) 00:13:27.649 fused_ordering(401) 00:13:27.649 fused_ordering(402) 00:13:27.649 fused_ordering(403) 00:13:27.649 
00:13:27.649 fused_ordering(404) ... 00:13:28.732 fused_ordering(1023) [620 consecutive fused_ordering iterations (404 through 1023) completed between 00:13:27.649 and 00:13:28.732; the per-iteration entries, identical apart from the counter, are collapsed here]
00:13:28.732 02:15:28 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:28.732 02:15:28 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:28.732 02:15:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:28.732 02:15:28 -- nvmf/common.sh@116 -- # sync 00:13:28.732 02:15:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:28.732 02:15:28 -- nvmf/common.sh@119 -- # set +e 00:13:28.732 02:15:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:28.732 02:15:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:28.732 rmmod nvme_tcp 00:13:28.732 rmmod nvme_fabrics 00:13:28.732 rmmod nvme_keyring 00:13:28.732 02:15:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:28.732 02:15:28 -- nvmf/common.sh@123 -- # set -e 00:13:28.732 02:15:28 -- nvmf/common.sh@124 -- # return 0
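The nvmftestfini/nvmfcleanup sequence above disables errexit and retries the kernel-module unload, because nvme-tcp can still hold a reference for a short while after the target exits; the rmmod lines are modprobe's verbose output once the unload succeeds. A minimal bash sketch of the same pattern (function name and commands taken from the trace; the break-on-success and the back-off sleep are assumptions, not verbatim autotest code):

    # Unload NVMe-oF kernel modules, tolerating transient "module in use" errors.
    nvmfcleanup() {
        sync
        set +e                                 # a busy module must not abort the run
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break   # verbose output: "rmmod nvme_tcp" etc.
            sleep 1                            # assumed back-off between attempts
        done
        modprobe -v -r nvme-fabrics
        set -e
        return 0
    }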
00:13:28.732 02:15:28 -- nvmf/common.sh@477 -- # '[' -n 81407 ']' 00:13:28.732 02:15:28 -- nvmf/common.sh@478 -- # killprocess 81407 00:13:28.732 02:15:28 -- common/autotest_common.sh@926 -- # '[' -z 81407 ']' 00:13:28.732 02:15:28 -- common/autotest_common.sh@930 -- # kill -0 81407 00:13:28.732 02:15:28 -- common/autotest_common.sh@931 -- # uname 00:13:28.732 02:15:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:28.732 02:15:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81407 00:13:28.732 killing process with pid 81407 00:13:28.732 02:15:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:28.732 02:15:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:28.732 02:15:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81407' 00:13:28.732 02:15:28 -- common/autotest_common.sh@945 -- # kill 81407 00:13:28.732 02:15:28 -- common/autotest_common.sh@950 -- # wait 81407 00:13:28.991 02:15:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:28.991 02:15:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:28.991 02:15:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:28.991 02:15:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.991 02:15:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:28.991 02:15:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.991 02:15:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.991 02:15:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.991 02:15:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:28.991 00:13:28.991 real 0m3.852s 00:13:28.991 user 0m4.563s 00:13:28.991 sys 0m1.343s 00:13:28.991 02:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.991 02:15:28 -- common/autotest_common.sh@10 -- # set +x 00:13:28.991 ************************************ 00:13:28.991 END TEST nvmf_fused_ordering 00:13:28.991 ************************************ 00:13:29.249 02:15:28 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:29.249 02:15:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:29.249 02:15:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:29.249 02:15:28 -- common/autotest_common.sh@10 -- # set +x 00:13:29.249 ************************************ 00:13:29.249 START TEST nvmf_delete_subsystem 00:13:29.249 ************************************ 00:13:29.249 02:15:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:29.249 * Looking for test storage... 
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:29.249 02:15:28 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:29.249 02:15:28 -- nvmf/common.sh@7 -- # uname -s 00:13:29.249 02:15:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.249 02:15:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.249 02:15:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.249 02:15:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.249 02:15:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.249 02:15:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.249 02:15:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.249 02:15:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.249 02:15:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.249 02:15:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.249 02:15:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:29.249 02:15:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:29.249 02:15:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.249 02:15:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.249 02:15:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:29.249 02:15:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:29.250 02:15:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.250 02:15:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.250 02:15:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.250 02:15:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain triple repeated; duplicate entries collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.250 02:15:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[toolchain triple repeated; duplicates collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.250 02:15:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[toolchain triple repeated; duplicates collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.250 02:15:28 --
paths/export.sh@5 -- # export PATH 00:13:29.250 02:15:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[toolchain triple repeated; duplicates collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.250 02:15:28 -- nvmf/common.sh@46 -- # : 0 00:13:29.250 02:15:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:29.250 02:15:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:29.250 02:15:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:29.250 02:15:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.250 02:15:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.250 02:15:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:29.250 02:15:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:29.250 02:15:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:29.250 02:15:28 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:29.250 02:15:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:29.250 02:15:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.250 02:15:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:29.250 02:15:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:29.250 02:15:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:29.250 02:15:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.250 02:15:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.250 02:15:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.250 02:15:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:29.250 02:15:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:29.250 02:15:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:29.250 02:15:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:29.250 02:15:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:29.250 02:15:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:29.250 02:15:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.250 02:15:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.250 02:15:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:29.250 02:15:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:29.250 02:15:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:29.250 02:15:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:29.250 02:15:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:29.250 02:15:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.250 02:15:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:29.250 02:15:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:29.250 02:15:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:29.250 02:15:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:29.250 02:15:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:29.250 02:15:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:29.250 Cannot find device "nvmf_tgt_br" 00:13:29.250
02:15:28 -- nvmf/common.sh@154 -- # true 00:13:29.250 02:15:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:29.250 Cannot find device "nvmf_tgt_br2" 00:13:29.250 02:15:28 -- nvmf/common.sh@155 -- # true 00:13:29.250 02:15:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:29.250 02:15:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:29.250 Cannot find device "nvmf_tgt_br" 00:13:29.250 02:15:28 -- nvmf/common.sh@157 -- # true 00:13:29.250 02:15:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:29.250 Cannot find device "nvmf_tgt_br2" 00:13:29.250 02:15:28 -- nvmf/common.sh@158 -- # true 00:13:29.250 02:15:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:29.250 02:15:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:29.509 02:15:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:29.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.509 02:15:28 -- nvmf/common.sh@161 -- # true 00:13:29.509 02:15:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:29.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:29.509 02:15:28 -- nvmf/common.sh@162 -- # true 00:13:29.509 02:15:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:29.509 02:15:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:29.509 02:15:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:29.509 02:15:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:29.509 02:15:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:29.509 02:15:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:29.509 02:15:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:29.509 02:15:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:29.509 02:15:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:29.509 02:15:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:29.509 02:15:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:29.509 02:15:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:29.509 02:15:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:29.509 02:15:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:29.509 02:15:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:29.509 02:15:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:29.509 02:15:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:29.509 02:15:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:29.509 02:15:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:29.509 02:15:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:29.509 02:15:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:29.509 02:15:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:29.509 02:15:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:29.509 02:15:28 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:29.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:13:29.509 00:13:29.509 --- 10.0.0.2 ping statistics --- 00:13:29.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.509 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:29.509 02:15:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:29.509 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:29.509 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:13:29.509 00:13:29.509 --- 10.0.0.3 ping statistics --- 00:13:29.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.509 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:13:29.509 02:15:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:29.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:29.509 00:13:29.509 --- 10.0.0.1 ping statistics --- 00:13:29.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.509 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:29.509 02:15:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.509 02:15:29 -- nvmf/common.sh@421 -- # return 0 00:13:29.509 02:15:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:29.509 02:15:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.509 02:15:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:29.509 02:15:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:29.509 02:15:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.509 02:15:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:29.509 02:15:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:29.509 02:15:29 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:29.509 02:15:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:29.509 02:15:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:29.509 02:15:29 -- common/autotest_common.sh@10 -- # set +x 00:13:29.509 02:15:29 -- nvmf/common.sh@469 -- # nvmfpid=81663 00:13:29.509 02:15:29 -- nvmf/common.sh@470 -- # waitforlisten 81663 00:13:29.509 02:15:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:29.509 02:15:29 -- common/autotest_common.sh@819 -- # '[' -z 81663 ']' 00:13:29.509 02:15:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.509 02:15:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:29.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.509 02:15:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.509 02:15:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:29.509 02:15:29 -- common/autotest_common.sh@10 -- # set +x 00:13:29.767 [2024-07-15 02:15:29.086022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
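The "Cannot find device"/"Cannot open network namespace" messages above are harmless: they come from tearing down a topology that does not exist yet. nvmf_veth_init then builds it: the initiator side stays in the root namespace as 10.0.0.1, the target side runs inside the nvmf_tgt_ns_spdk namespace as 10.0.0.2 (plus 10.0.0.3 on a second interface), the veth stubs are enslaved to the nvmf_br bridge, and the three pings prove reachability in each direction before the target starts. A condensed sketch using only commands that appear in the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the FORWARD rule are omitted for brevity):

    # Root-namespace initiator (10.0.0.1) bridged to a namespaced target (10.0.0.2).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # move the target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                 # bridge the two veth stubs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target -> initiator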
00:13:29.767 [2024-07-15 02:15:29.086097] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.767 [2024-07-15 02:15:29.226493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:29.768 [2024-07-15 02:15:29.303751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:29.768 [2024-07-15 02:15:29.303898] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.768 [2024-07-15 02:15:29.303911] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.768 [2024-07-15 02:15:29.303921] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.768 [2024-07-15 02:15:29.304073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.768 [2024-07-15 02:15:29.304274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.702 02:15:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:30.702 02:15:29 -- common/autotest_common.sh@852 -- # return 0 00:13:30.702 02:15:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:30.702 02:15:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:30.702 02:15:29 -- common/autotest_common.sh@10 -- # set +x 00:13:30.702 02:15:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.702 02:15:30 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.702 02:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.702 02:15:30 -- common/autotest_common.sh@10 -- # set +x 00:13:30.702 [2024-07-15 02:15:30.041042] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.702 02:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.702 02:15:30 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:30.702 02:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.702 02:15:30 -- common/autotest_common.sh@10 -- # set +x 00:13:30.702 02:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.702 02:15:30 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.702 02:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.702 02:15:30 -- common/autotest_common.sh@10 -- # set +x 00:13:30.702 [2024-07-15 02:15:30.057162] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.702 02:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.702 02:15:30 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:30.702 02:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.702 02:15:30 -- common/autotest_common.sh@10 -- # set +x 00:13:30.702 NULL1 00:13:30.702 02:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.702 02:15:30 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:30.702 02:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.702 02:15:30 -- common/autotest_common.sh@10 -- # set +x 00:13:30.702 
Delay0 00:13:30.702 02:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.702 02:15:30 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.702 02:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.702 02:15:30 -- common/autotest_common.sh@10 -- # set +x 00:13:30.702 02:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.702 02:15:30 -- target/delete_subsystem.sh@28 -- # perf_pid=81714 00:13:30.702 02:15:30 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:30.702 02:15:30 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:30.702 [2024-07-15 02:15:30.251843] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:32.605 02:15:32 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.605 02:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.605 02:15:32 -- common/autotest_common.sh@10 -- # set +x 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 starting I/O failed: -6 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 starting I/O failed: -6 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 starting I/O failed: -6 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 starting I/O failed: -6 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 starting I/O failed: -6 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 starting I/O failed: -6 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 starting I/O failed: -6 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 starting I/O failed: -6 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Read completed with error (sct=0, sc=8) 00:13:32.865 Write completed with error (sct=0, sc=8) 00:13:32.865 starting I/O failed: -6 
00:13:32.865 [the storm of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions and interleaved "starting I/O failed: -6" submission failures continues; the repetitive entries are collapsed here and the distinct qpair teardown diagnostics retained:]
00:13:32.865 [2024-07-15 02:15:32.286032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff1a8000c00 is same with the state(5) to be set
00:13:32.866 [2024-07-15 02:15:32.288243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2035ef0 is same with the state(5) to be set
00:13:32.866 [2024-07-15 02:15:32.288748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2039720 is same with the state(5) to be set
00:13:33.864 [2024-07-15 02:15:33.269661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203b8c0 is same with the state(5) to be set
00:13:33.864 [2024-07-15 02:15:33.283930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20399d0 is same with the state(5) to be set
00:13:33.864 [2024-07-15 02:15:33.289068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2039350 is same with the state(5) to be set
00:13:33.864 [2024-07-15 02:15:33.289882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff1a800bf20 is same with the state(5) to be set
00:13:33.864 [2024-07-15 02:15:33.290663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff1a800c480 is same with the state(5) to be set
00:13:33.864 [2024-07-15 02:15:33.291334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203b8c0 (9): Bad file descriptor
00:13:33.864 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:13:33.864 02:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.864 02:15:33 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:33.864 02:15:33 -- target/delete_subsystem.sh@35 -- # kill -0 81714 00:13:33.864 02:15:33 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:13:33.864 Initializing NVMe Controllers 00:13:33.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:33.864 Controller IO queue size 128, less than required. 00:13:33.864 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:33.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:33.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:33.864 Initialization complete. Launching workers.
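The completion storm is the point of this test: Delay0 holds every queued command for roughly one second (the four 1000000 arguments to bdev_delay_create are latencies in microseconds), so nvmf_delete_subsystem lands while up to 128 commands are still in flight. The queued I/O then completes with (sct=0, sc=8), which corresponds to the NVMe generic status "Command Aborted due to SQ Deletion", new submissions fail with -6 (-ENXIO), and spdk_nvme_perf exits with "errors occurred" as expected. Since rpc_cmd is autotest's wrapper around SPDK's scripts/rpc.py, the same flow can be reproduced by hand roughly as follows (the rpc.py path and the explicit backgrounding are assumptions; all subcommands and flags are exactly as logged):

    # Delete an nvmf subsystem while a delay bdev still holds queued I/O.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path to rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512               # 1000 MiB backing bdev, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &      # keep 128 commands in flight
    perf_pid=$!
    sleep 2                                            # let I/O pile up behind Delay0
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # expect sc=8 aborts in perf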
00:13:33.864 ======================================================== 00:13:33.864 Latency(us) 00:13:33.864 Device Information : IOPS MiB/s Average min max 00:13:33.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 144.91 0.07 982447.24 523.98 2004340.69 00:13:33.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.77 0.08 908738.85 1846.83 1010644.76 00:13:33.864 ======================================================== 00:13:33.864 Total : 308.68 0.15 943341.50 523.98 2004340.69 00:13:33.864 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@35 -- # kill -0 81714 00:13:34.427 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (81714) - No such process 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@45 -- # NOT wait 81714 00:13:34.427 02:15:33 -- common/autotest_common.sh@640 -- # local es=0 00:13:34.427 02:15:33 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 81714 00:13:34.427 02:15:33 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:34.427 02:15:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:34.427 02:15:33 -- common/autotest_common.sh@632 -- # type -t wait 00:13:34.427 02:15:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:34.427 02:15:33 -- common/autotest_common.sh@643 -- # wait 81714 00:13:34.427 02:15:33 -- common/autotest_common.sh@643 -- # es=1 00:13:34.427 02:15:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:34.427 02:15:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:34.427 02:15:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:34.427 02:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.427 02:15:33 -- common/autotest_common.sh@10 -- # set +x 00:13:34.427 02:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.427 02:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.427 02:15:33 -- common/autotest_common.sh@10 -- # set +x 00:13:34.427 [2024-07-15 02:15:33.815998] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.427 02:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.427 02:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.427 02:15:33 -- common/autotest_common.sh@10 -- # set +x 00:13:34.427 02:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@54 -- # perf_pid=81761 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@57 -- # kill -0 81761 00:13:34.427 02:15:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:34.685 [2024-07-15 02:15:33.985143] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:34.942 02:15:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:34.942 02:15:34 -- target/delete_subsystem.sh@57 -- # kill -0 81761 00:13:34.942 02:15:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:35.507 02:15:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:35.507 02:15:34 -- target/delete_subsystem.sh@57 -- # kill -0 81761 00:13:35.507 02:15:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.072 02:15:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.072 02:15:35 -- target/delete_subsystem.sh@57 -- # kill -0 81761 00:13:36.072 02:15:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.330 02:15:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.330 02:15:35 -- target/delete_subsystem.sh@57 -- # kill -0 81761 00:13:36.330 02:15:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:36.897 02:15:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:36.897 02:15:36 -- target/delete_subsystem.sh@57 -- # kill -0 81761 00:13:36.897 02:15:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.466 02:15:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:37.466 02:15:36 -- target/delete_subsystem.sh@57 -- # kill -0 81761 00:13:37.466 02:15:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:37.725 Initializing NVMe Controllers 00:13:37.725 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.725 Controller IO queue size 128, less than required. 00:13:37.725 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:37.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:37.726 Initialization complete. Launching workers. 
00:13:37.726 ========================================================
00:13:37.726                                                                                Latency(us)
00:13:37.726 Device Information                                                       :       IOPS      MiB/s    Average         min         max
00:13:37.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002840.83  1000176.86  1009508.55
00:13:37.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1005274.75  1000230.83  1013088.25
00:13:37.726 ========================================================
00:13:37.726 Total                                                                    :     256.00       0.12 1004057.79  1000176.86  1013088.25
00:13:37.726
00:13:37.984 02:15:37 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:13:37.984 02:15:37 -- target/delete_subsystem.sh@57 -- # kill -0 81761
00:13:37.984 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (81761) - No such process
00:13:37.984 02:15:37 -- target/delete_subsystem.sh@67 -- # wait 81761
00:13:37.984 02:15:37 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:13:37.984 02:15:37 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:13:37.984 02:15:37 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:37.984 02:15:37 -- nvmf/common.sh@116 -- # sync
00:13:37.984 02:15:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:37.984 02:15:37 -- nvmf/common.sh@119 -- # set +e
00:13:37.984 02:15:37 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:37.984 02:15:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:37.984 rmmod nvme_tcp
00:13:37.984 rmmod nvme_fabrics
00:13:37.984 rmmod nvme_keyring
00:13:37.984 02:15:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:37.984 02:15:37 -- nvmf/common.sh@123 -- # set -e
00:13:37.984 02:15:37 -- nvmf/common.sh@124 -- # return 0
00:13:37.984 02:15:37 -- nvmf/common.sh@477 -- # '[' -n 81663 ']'
00:13:37.984 02:15:37 -- nvmf/common.sh@478 -- # killprocess 81663
00:13:37.984 02:15:37 -- common/autotest_common.sh@926 -- # '[' -z 81663 ']'
00:13:37.984 02:15:37 -- common/autotest_common.sh@930 -- # kill -0 81663
00:13:37.984 02:15:37 -- common/autotest_common.sh@931 -- # uname
00:13:37.984 02:15:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:37.984 02:15:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81663
00:13:37.984 killing process with pid 81663
00:13:37.984 02:15:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:13:37.984 02:15:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:13:37.984 02:15:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81663'
00:13:37.984 02:15:37 -- common/autotest_common.sh@945 -- # kill 81663
00:13:37.984 02:15:37 -- common/autotest_common.sh@950 -- # wait 81663
00:13:38.243 02:15:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:13:38.243 02:15:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:13:38.243 02:15:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:13:38.243 02:15:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:38.243 02:15:37 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:13:38.243 02:15:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:38.243 02:15:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:38.243 02:15:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:38.243 02:15:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:13:38.243 ************************************
00:13:38.243 END TEST nvmf_delete_subsystem
************************************
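Stripped of its xtrace noise, the nvmf_delete_subsystem test that just ended has a short control flow: create a subsystem backed by the latency-injecting Delay0 bdev, drive it with spdk_nvme_perf, delete the subsystem mid-I/O, then poll until perf exits and tear the target down. A minimal sketch of that flow, with rpc_cmd assumed to forward to scripts/rpc.py and the deletion step (which happened before this chunk of the log) marked as an assumption:

# Sketch reconstructed from the trace; the real delete_subsystem.sh differs in detail.
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive I/O in the background (arguments copied from delete_subsystem.sh@52).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Assumed step: the subsystem deletion that provokes the aborted-I/O path
# is not visible in this part of the log.
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Poll until perf exits; kill -0 probes the PID without sending a signal.
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do
    (( delay++ > 20 )) && exit 1   # guard traced at delete_subsystem.sh@60
    sleep 0.5
done

Once the loop falls through, nvmftestfini runs the teardown traced above: modprobe -v -r for nvme-tcp and nvme-fabrics to unload the initiator modules, killprocess against the target PID, and a flush of the test interface addresses.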
00:13:38.243 00:13:38.243 real 0m9.173s 00:13:38.243 user 0m28.412s 00:13:38.243 sys 0m1.494s 00:13:38.243 02:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.243 02:15:37 -- common/autotest_common.sh@10 -- # set +x 00:13:38.243 02:15:37 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:13:38.243 02:15:37 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:38.243 02:15:37 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:38.243 02:15:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:38.243 02:15:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.243 02:15:37 -- common/autotest_common.sh@10 -- # set +x 00:13:38.243 ************************************ 00:13:38.243 START TEST nvmf_host_management 00:13:38.243 ************************************ 00:13:38.243 02:15:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:38.502 * Looking for test storage... 00:13:38.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:38.502 02:15:37 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:38.502 02:15:37 -- nvmf/common.sh@7 -- # uname -s 00:13:38.502 02:15:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.502 02:15:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.502 02:15:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.502 02:15:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.502 02:15:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.502 02:15:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.502 02:15:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.502 02:15:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.502 02:15:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.502 02:15:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.502 02:15:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:38.502 02:15:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:38.502 02:15:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.502 02:15:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.502 02:15:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:38.502 02:15:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.502 02:15:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.502 02:15:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.502 02:15:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.502 02:15:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.502 02:15:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.502 02:15:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.502 02:15:37 -- paths/export.sh@5 -- # export PATH 00:13:38.502 02:15:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.502 02:15:37 -- nvmf/common.sh@46 -- # : 0 00:13:38.502 02:15:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:38.502 02:15:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:38.502 02:15:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:38.502 02:15:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.502 02:15:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.502 02:15:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:38.502 02:15:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:38.502 02:15:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:38.502 02:15:37 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.502 02:15:37 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.502 02:15:37 -- target/host_management.sh@104 -- # nvmftestinit 00:13:38.502 02:15:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:38.502 02:15:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.502 02:15:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:38.502 02:15:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:38.503 02:15:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:38.503 02:15:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.503 02:15:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.503 02:15:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.503 02:15:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:38.503 02:15:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:38.503 02:15:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:38.503 02:15:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:38.503 02:15:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp 
]] 00:13:38.503 02:15:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:38.503 02:15:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.503 02:15:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.503 02:15:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:38.503 02:15:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:38.503 02:15:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:38.503 02:15:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:38.503 02:15:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:38.503 02:15:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.503 02:15:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:38.503 02:15:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:38.503 02:15:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:38.503 02:15:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:38.503 02:15:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:38.503 02:15:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:38.503 Cannot find device "nvmf_tgt_br" 00:13:38.503 02:15:37 -- nvmf/common.sh@154 -- # true 00:13:38.503 02:15:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:38.503 Cannot find device "nvmf_tgt_br2" 00:13:38.503 02:15:37 -- nvmf/common.sh@155 -- # true 00:13:38.503 02:15:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:38.503 02:15:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:38.503 Cannot find device "nvmf_tgt_br" 00:13:38.503 02:15:37 -- nvmf/common.sh@157 -- # true 00:13:38.503 02:15:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:38.503 Cannot find device "nvmf_tgt_br2" 00:13:38.503 02:15:37 -- nvmf/common.sh@158 -- # true 00:13:38.503 02:15:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:38.503 02:15:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:38.503 02:15:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:38.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.503 02:15:38 -- nvmf/common.sh@161 -- # true 00:13:38.503 02:15:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:38.503 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:38.503 02:15:38 -- nvmf/common.sh@162 -- # true 00:13:38.503 02:15:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:38.503 02:15:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:38.503 02:15:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:38.503 02:15:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:38.503 02:15:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:38.761 02:15:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:38.761 02:15:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:38.761 02:15:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:38.761 02:15:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:38.761 
02:15:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:38.761 02:15:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:38.761 02:15:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:38.761 02:15:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:38.761 02:15:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:38.761 02:15:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:38.761 02:15:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:38.761 02:15:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:38.761 02:15:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:38.761 02:15:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:38.761 02:15:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:38.761 02:15:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:38.761 02:15:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:38.761 02:15:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:38.761 02:15:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:38.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:13:38.761 00:13:38.761 --- 10.0.0.2 ping statistics --- 00:13:38.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.761 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:38.761 02:15:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:38.761 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:38.761 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:13:38.761 00:13:38.761 --- 10.0.0.3 ping statistics --- 00:13:38.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.761 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:38.761 02:15:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:38.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:38.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:38.761 00:13:38.761 --- 10.0.0.1 ping statistics --- 00:13:38.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.761 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:38.761 02:15:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.761 02:15:38 -- nvmf/common.sh@421 -- # return 0 00:13:38.761 02:15:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:38.761 02:15:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.761 02:15:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:38.761 02:15:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:38.761 02:15:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.761 02:15:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:38.761 02:15:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:38.761 02:15:38 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:38.761 02:15:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:38.761 02:15:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.761 02:15:38 -- common/autotest_common.sh@10 -- # set +x 00:13:38.761 ************************************ 00:13:38.761 START TEST nvmf_host_management 00:13:38.761 ************************************ 00:13:38.761 02:15:38 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:38.761 02:15:38 -- target/host_management.sh@69 -- # starttarget 00:13:38.761 02:15:38 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:38.761 02:15:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:38.761 02:15:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:38.761 02:15:38 -- common/autotest_common.sh@10 -- # set +x 00:13:38.761 02:15:38 -- nvmf/common.sh@469 -- # nvmfpid=81989 00:13:38.761 02:15:38 -- nvmf/common.sh@470 -- # waitforlisten 81989 00:13:38.761 02:15:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:38.761 02:15:38 -- common/autotest_common.sh@819 -- # '[' -z 81989 ']' 00:13:38.761 02:15:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.761 02:15:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:38.761 02:15:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.761 02:15:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:38.761 02:15:38 -- common/autotest_common.sh@10 -- # set +x 00:13:39.019 [2024-07-15 02:15:38.329056] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:39.019 [2024-07-15 02:15:38.329167] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.019 [2024-07-15 02:15:38.468861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.019 [2024-07-15 02:15:38.559953] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:39.019 [2024-07-15 02:15:38.560334] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
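For context on the addresses just pinged: the nvmf_veth_init sequence traced a little earlier (nvmf/common.sh@140 onward) builds the whole NVMe/TCP test network by hand. Reduced to one target interface (the nvmf_tgt_if2/10.0.0.3 pair and the cleanup of stale devices are omitted), it amounts to:

# Target side lives in its own network namespace; initiator stays in the root ns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A bridge stitches the two veth halves together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic, let the bridge forward, then smoke-test reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2

The pings in the trace are nothing more than reachability checks run before any NVMe traffic flows.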
00:13:39.019 [2024-07-15 02:15:38.560383] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.019 [2024-07-15 02:15:38.560526] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.019 [2024-07-15 02:15:38.560802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.019 [2024-07-15 02:15:38.561353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.019 [2024-07-15 02:15:38.561517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:39.019 [2024-07-15 02:15:38.561840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.956 02:15:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:39.956 02:15:39 -- common/autotest_common.sh@852 -- # return 0 00:13:39.956 02:15:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:39.956 02:15:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:39.956 02:15:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.956 02:15:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.956 02:15:39 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.956 02:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.956 02:15:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.956 [2024-07-15 02:15:39.310681] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.956 02:15:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.956 02:15:39 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:39.956 02:15:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:39.956 02:15:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.956 02:15:39 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:39.956 02:15:39 -- target/host_management.sh@23 -- # cat 00:13:39.956 02:15:39 -- target/host_management.sh@30 -- # rpc_cmd 00:13:39.956 02:15:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.956 02:15:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.956 Malloc0 00:13:39.956 [2024-07-15 02:15:39.386357] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.956 02:15:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.956 02:15:39 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:39.956 02:15:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:39.956 02:15:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.956 02:15:39 -- target/host_management.sh@73 -- # perfpid=82065 00:13:39.956 02:15:39 -- target/host_management.sh@74 -- # waitforlisten 82065 /var/tmp/bdevperf.sock 00:13:39.956 02:15:39 -- common/autotest_common.sh@819 -- # '[' -z 82065 ']' 00:13:39.956 02:15:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.956 02:15:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:39.956 02:15:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:39.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
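The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock..." message comes from waitforlisten. Conceptually it is a bounded poll with two exit conditions: the process must stay alive, and its RPC socket must start answering. A hypothetical reduction, reusing the rpc_addr and max_retries names visible in the trace (the real helper in autotest_common.sh handles more corner cases):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- )); do
        kill -0 "$pid" 2> /dev/null || return 1   # app died before listening
        # rpc_get_methods succeeds once the app accepts RPC connections.
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1   # assumed pacing between probes
    done
    return 1
}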
00:13:39.956 02:15:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:39.956 02:15:39 -- common/autotest_common.sh@10 -- # set +x 00:13:39.956 02:15:39 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:39.956 02:15:39 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:39.956 02:15:39 -- nvmf/common.sh@520 -- # config=() 00:13:39.956 02:15:39 -- nvmf/common.sh@520 -- # local subsystem config 00:13:39.956 02:15:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:39.956 02:15:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:39.956 { 00:13:39.956 "params": { 00:13:39.956 "name": "Nvme$subsystem", 00:13:39.956 "trtype": "$TEST_TRANSPORT", 00:13:39.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:39.956 "adrfam": "ipv4", 00:13:39.956 "trsvcid": "$NVMF_PORT", 00:13:39.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:39.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:39.956 "hdgst": ${hdgst:-false}, 00:13:39.956 "ddgst": ${ddgst:-false} 00:13:39.956 }, 00:13:39.956 "method": "bdev_nvme_attach_controller" 00:13:39.956 } 00:13:39.956 EOF 00:13:39.956 )") 00:13:39.956 02:15:39 -- nvmf/common.sh@542 -- # cat 00:13:39.956 02:15:39 -- nvmf/common.sh@544 -- # jq . 00:13:39.956 02:15:39 -- nvmf/common.sh@545 -- # IFS=, 00:13:39.956 02:15:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:39.956 "params": { 00:13:39.956 "name": "Nvme0", 00:13:39.956 "trtype": "tcp", 00:13:39.956 "traddr": "10.0.0.2", 00:13:39.956 "adrfam": "ipv4", 00:13:39.956 "trsvcid": "4420", 00:13:39.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:39.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:39.956 "hdgst": false, 00:13:39.956 "ddgst": false 00:13:39.956 }, 00:13:39.956 "method": "bdev_nvme_attach_controller" 00:13:39.956 }' 00:13:39.956 [2024-07-15 02:15:39.494905] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:39.956 [2024-07-15 02:15:39.495018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82065 ] 00:13:40.215 [2024-07-15 02:15:39.629722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.215 [2024-07-15 02:15:39.726289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.475 Running I/O for 10 seconds... 
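The loop the trace enters next is waitforio (host_management.sh@45-64): it reads bdev_get_iostat over the bdevperf RPC socket until Nvme0n1 reports at least 100 completed reads, so that the host-removal step that follows lands on live traffic. A plausible reconstruction from the traced lines (only the retry pacing is assumed; the traced run passed on its first probe with 2009 reads):

waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    for (( i = 10; i != 0; i-- )); do
        # Ask bdevperf for per-bdev I/O statistics and pull out the read count.
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25   # assumed; not visible in the trace
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme0n1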
00:13:41.044 02:15:40 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:41.044 02:15:40 -- common/autotest_common.sh@852 -- # return 0
00:13:41.044 02:15:40 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:13:41.044 02:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:41.044 02:15:40 -- common/autotest_common.sh@10 -- # set +x
00:13:41.044 02:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:41.044 02:15:40 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:13:41.044 02:15:40 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:13:41.044 02:15:40 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:13:41.044 02:15:40 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:13:41.044 02:15:40 -- target/host_management.sh@52 -- # local ret=1
00:13:41.044 02:15:40 -- target/host_management.sh@53 -- # local i
00:13:41.044 02:15:40 -- target/host_management.sh@54 -- # (( i = 10 ))
00:13:41.044 02:15:40 -- target/host_management.sh@54 -- # (( i != 0 ))
00:13:41.044 02:15:40 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:13:41.044 02:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:41.044 02:15:40 -- common/autotest_common.sh@10 -- # set +x
00:13:41.044 02:15:40 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:13:41.044 02:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:41.044 02:15:40 -- target/host_management.sh@55 -- # read_io_count=2009
00:13:41.044 02:15:40 -- target/host_management.sh@58 -- # '[' 2009 -ge 100 ']'
00:13:41.044 02:15:40 -- target/host_management.sh@59 -- # ret=0
00:13:41.044 02:15:40 -- target/host_management.sh@60 -- # break
00:13:41.044 02:15:40 -- target/host_management.sh@64 -- # return 0
00:13:41.044 02:15:40 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:13:41.044 02:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:41.044 02:15:40 -- common/autotest_common.sh@10 -- # set +x
00:13:41.044 [2024-07-15 02:15:40.568594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831d00 is same with the state(5) to be set
00:13:41.044 [the tcp.c:1574 message above repeats verbatim, with advancing timestamps, through 02:15:40.569096; the duplicate lines are omitted]
00:13:41.044 [2024-07-15 02:15:40.569295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:41.044 [2024-07-15 02:15:40.569330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.044 [2024-07-15 02:15:40.569352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:41.044 [2024-07-15 02:15:40.569362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.044 [2024-07-15 02:15:40.569374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:41.044 [2024-07-15 02:15:40.569383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:41.044 [2024-07-15 02:15:40.569394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20736 len:128 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:13:41.044 [2024-07-15 02:15:40.569404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.044 [2024-07-15 02:15:40.569415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.044 [2024-07-15 02:15:40.569424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.044 [2024-07-15 02:15:40.569434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.044 [2024-07-15 02:15:40.569443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.044 [2024-07-15 02:15:40.569454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.044 [2024-07-15 02:15:40.569463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.044 [2024-07-15 02:15:40.569473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.044 [2024-07-15 02:15:40.569481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:13:41.045 [2024-07-15 02:15:40.569626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 
[2024-07-15 02:15:40.569848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.569969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.569980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 
02:15:40.570108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570314] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.045 [2024-07-15 02:15:40.570443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.045 [2024-07-15 02:15:40.570451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.046 [2024-07-15 02:15:40.570776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.570786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192ae30 is same with the state(5) to be set 00:13:41.046 [2024-07-15 02:15:40.570852] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x192ae30 was disconnected and freed. reset controller. 00:13:41.046 [2024-07-15 02:15:40.572076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:41.046 task offset: 20224 on job bdev=Nvme0n1 fails 00:13:41.046 00:13:41.046 Latency(us) 00:13:41.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.046 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:41.046 Job: Nvme0n1 ended in about 0.67 seconds with error 00:13:41.046 Verification LBA range: start 0x0 length 0x400 00:13:41.046 Nvme0n1 : 0.67 3255.69 203.48 95.19 0.00 18795.96 2725.70 26214.40 00:13:41.046 =================================================================================================================== 00:13:41.046 Total : 3255.69 203.48 95.19 0.00 18795.96 2725.70 26214.40 00:13:41.046 [2024-07-15 02:15:40.574163] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:41.046 [2024-07-15 02:15:40.574190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19007f0 (9): Bad file descriptor 00:13:41.046 02:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.046 02:15:40 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:41.046 02:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.046 02:15:40 -- common/autotest_common.sh@10 -- # set +x 00:13:41.046 [2024-07-15 02:15:40.577261] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:41.046 [2024-07-15 02:15:40.577448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:41.046 [2024-07-15 02:15:40.577475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.046 [2024-07-15 02:15:40.577491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:41.046 [2024-07-15 02:15:40.577502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:41.046 [2024-07-15 02:15:40.577510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:41.046 [2024-07-15 02:15:40.577518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19007f0 00:13:41.046 [2024-07-15 02:15:40.577551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19007f0 (9): Bad file descriptor 00:13:41.046 [2024-07-15 02:15:40.577568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:41.046 [2024-07-15 02:15:40.577577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:41.046 
[2024-07-15 02:15:40.577586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:41.046 [2024-07-15 02:15:40.577631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:13:41.046 02:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.046 02:15:40 -- target/host_management.sh@87 -- # sleep 1 00:13:42.432 02:15:41 -- target/host_management.sh@91 -- # kill -9 82065 00:13:42.432 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82065) - No such process 00:13:42.432 02:15:41 -- target/host_management.sh@91 -- # true 00:13:42.432 02:15:41 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:42.432 02:15:41 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:42.432 02:15:41 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:42.432 02:15:41 -- nvmf/common.sh@520 -- # config=() 00:13:42.432 02:15:41 -- nvmf/common.sh@520 -- # local subsystem config 00:13:42.432 02:15:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:42.432 02:15:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:42.432 { 00:13:42.432 "params": { 00:13:42.432 "name": "Nvme$subsystem", 00:13:42.432 "trtype": "$TEST_TRANSPORT", 00:13:42.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.432 "adrfam": "ipv4", 00:13:42.432 "trsvcid": "$NVMF_PORT", 00:13:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.432 "hdgst": ${hdgst:-false}, 00:13:42.432 "ddgst": ${ddgst:-false} 00:13:42.432 }, 00:13:42.432 "method": "bdev_nvme_attach_controller" 00:13:42.432 } 00:13:42.432 EOF 00:13:42.432 )") 00:13:42.432 02:15:41 -- nvmf/common.sh@542 -- # cat 00:13:42.432 02:15:41 -- nvmf/common.sh@544 -- # jq . 00:13:42.432 02:15:41 -- nvmf/common.sh@545 -- # IFS=, 00:13:42.432 02:15:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:42.432 "params": { 00:13:42.432 "name": "Nvme0", 00:13:42.432 "trtype": "tcp", 00:13:42.432 "traddr": "10.0.0.2", 00:13:42.432 "adrfam": "ipv4", 00:13:42.432 "trsvcid": "4420", 00:13:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:42.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:42.432 "hdgst": false, 00:13:42.432 "ddgst": false 00:13:42.432 }, 00:13:42.432 "method": "bdev_nvme_attach_controller" 00:13:42.432 }' 00:13:42.432 [2024-07-15 02:15:41.660292] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:42.432 [2024-07-15 02:15:41.660405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82111 ] 00:13:42.432 [2024-07-15 02:15:41.799660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.432 [2024-07-15 02:15:41.889921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.691 Running I/O for 1 seconds... 
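The --json /dev/fd/62 invocation above feeds bdevperf an attach-controller config that gen_nvmf_target_json assembles inline. A sketch of an equivalent manual attach, borrowing the wait-for-RPC (-z) pattern the lvs_grow test uses later in this log (socket path, flags, addresses, and NQNs are taken verbatim from this run, not a general recipe):

    # Start bdevperf idle, attach the controller over its RPC socket, then run.
    # (The JSON config above additionally pins hostnqn=nqn.2016-06.io.spdk:host0.)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 1 -z &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests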
00:13:43.627 00:13:43.627 Latency(us) 00:13:43.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.627 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:43.627 Verification LBA range: start 0x0 length 0x400 00:13:43.627 Nvme0n1 : 1.01 3534.44 220.90 0.00 0.00 17800.48 1131.99 24188.74 00:13:43.627 =================================================================================================================== 00:13:43.627 Total : 3534.44 220.90 0.00 0.00 17800.48 1131.99 24188.74 00:13:43.884 02:15:43 -- target/host_management.sh@101 -- # stoptarget 00:13:43.884 02:15:43 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:43.884 02:15:43 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:43.884 02:15:43 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:43.884 02:15:43 -- target/host_management.sh@40 -- # nvmftestfini 00:13:43.884 02:15:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:43.884 02:15:43 -- nvmf/common.sh@116 -- # sync 00:13:43.884 02:15:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:43.884 02:15:43 -- nvmf/common.sh@119 -- # set +e 00:13:43.884 02:15:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:43.884 02:15:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:43.884 rmmod nvme_tcp 00:13:43.884 rmmod nvme_fabrics 00:13:43.884 rmmod nvme_keyring 00:13:43.884 02:15:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:43.884 02:15:43 -- nvmf/common.sh@123 -- # set -e 00:13:43.884 02:15:43 -- nvmf/common.sh@124 -- # return 0 00:13:43.884 02:15:43 -- nvmf/common.sh@477 -- # '[' -n 81989 ']' 00:13:43.884 02:15:43 -- nvmf/common.sh@478 -- # killprocess 81989 00:13:43.884 02:15:43 -- common/autotest_common.sh@926 -- # '[' -z 81989 ']' 00:13:43.884 02:15:43 -- common/autotest_common.sh@930 -- # kill -0 81989 00:13:43.884 02:15:43 -- common/autotest_common.sh@931 -- # uname 00:13:43.884 02:15:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:43.884 02:15:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81989 00:13:43.884 02:15:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:44.142 02:15:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:44.142 killing process with pid 81989 00:13:44.142 02:15:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81989' 00:13:44.142 02:15:43 -- common/autotest_common.sh@945 -- # kill 81989 00:13:44.142 02:15:43 -- common/autotest_common.sh@950 -- # wait 81989 00:13:44.142 [2024-07-15 02:15:43.668493] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:44.142 02:15:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:44.142 02:15:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:44.142 02:15:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:44.142 02:15:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.142 02:15:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:44.142 02:15:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.142 02:15:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.142 02:15:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.401 02:15:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:44.401 00:13:44.401 real 0m5.474s 00:13:44.401 user 
0m22.688s 00:13:44.401 sys 0m1.434s 00:13:44.401 02:15:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.401 02:15:43 -- common/autotest_common.sh@10 -- # set +x 00:13:44.401 ************************************ 00:13:44.401 END TEST nvmf_host_management 00:13:44.401 ************************************ 00:13:44.401 02:15:43 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:44.401 00:13:44.401 real 0m5.992s 00:13:44.401 user 0m22.803s 00:13:44.401 sys 0m1.690s 00:13:44.401 02:15:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.401 02:15:43 -- common/autotest_common.sh@10 -- # set +x 00:13:44.401 ************************************ 00:13:44.401 END TEST nvmf_host_management 00:13:44.401 ************************************ 00:13:44.401 02:15:43 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:44.401 02:15:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:44.401 02:15:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:44.401 02:15:43 -- common/autotest_common.sh@10 -- # set +x 00:13:44.401 ************************************ 00:13:44.401 START TEST nvmf_lvol 00:13:44.401 ************************************ 00:13:44.401 02:15:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:44.401 * Looking for test storage... 00:13:44.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:44.401 02:15:43 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.401 02:15:43 -- nvmf/common.sh@7 -- # uname -s 00:13:44.401 02:15:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.401 02:15:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.401 02:15:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.401 02:15:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.401 02:15:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.401 02:15:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.401 02:15:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.401 02:15:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.401 02:15:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.401 02:15:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.401 02:15:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:44.401 02:15:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:13:44.401 02:15:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.401 02:15:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.401 02:15:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.401 02:15:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.401 02:15:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.401 02:15:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.401 02:15:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.401 02:15:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.401 02:15:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.401 02:15:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.401 02:15:43 -- paths/export.sh@5 -- # export PATH 00:13:44.401 02:15:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.401 02:15:43 -- nvmf/common.sh@46 -- # : 0 00:13:44.401 02:15:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:44.401 02:15:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:44.401 02:15:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:44.401 02:15:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.401 02:15:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.401 02:15:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:44.401 02:15:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:44.401 02:15:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:44.401 02:15:43 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.401 02:15:43 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.401 02:15:43 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:44.401 02:15:43 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:44.401 02:15:43 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:44.401 02:15:43 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:44.401 02:15:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:44.401 02:15:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
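The nvmf_lvol.sh size knobs just exported map one-to-one onto RPC arguments that reappear later in this test; a quick recap with the values from this run:

    # Values set above and where they resurface below:
    MALLOC_BDEV_SIZE=64       # bdev_malloc_create 64 512 (both base bdevs)
    MALLOC_BLOCK_SIZE=512
    LVOL_BDEV_INIT_SIZE=20    # bdev_lvol_create ... lvol 20
    LVOL_BDEV_FINAL_SIZE=30   # bdev_lvol_resize ... 30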
00:13:44.401 02:15:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:44.401 02:15:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:44.401 02:15:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:44.401 02:15:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.401 02:15:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.401 02:15:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.401 02:15:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:44.401 02:15:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:44.401 02:15:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:44.401 02:15:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:44.401 02:15:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:44.401 02:15:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:44.401 02:15:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.401 02:15:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.401 02:15:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:44.401 02:15:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:44.401 02:15:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.401 02:15:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.401 02:15:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.401 02:15:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.401 02:15:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.401 02:15:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.401 02:15:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.401 02:15:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.401 02:15:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:44.401 02:15:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:44.660 Cannot find device "nvmf_tgt_br" 00:13:44.660 02:15:43 -- nvmf/common.sh@154 -- # true 00:13:44.660 02:15:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.660 Cannot find device "nvmf_tgt_br2" 00:13:44.660 02:15:43 -- nvmf/common.sh@155 -- # true 00:13:44.660 02:15:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:44.660 02:15:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:44.660 Cannot find device "nvmf_tgt_br" 00:13:44.660 02:15:43 -- nvmf/common.sh@157 -- # true 00:13:44.660 02:15:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:44.660 Cannot find device "nvmf_tgt_br2" 00:13:44.660 02:15:43 -- nvmf/common.sh@158 -- # true 00:13:44.660 02:15:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:44.660 02:15:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:44.660 02:15:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:44.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.660 02:15:44 -- nvmf/common.sh@161 -- # true 00:13:44.660 02:15:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.660 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.660 02:15:44 -- nvmf/common.sh@162 -- # true 00:13:44.660 02:15:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.660 02:15:44 -- nvmf/common.sh@168 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:13:44.660 02:15:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.660 02:15:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.660 02:15:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.660 02:15:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.660 02:15:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.660 02:15:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:44.660 02:15:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:44.660 02:15:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:44.660 02:15:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:44.660 02:15:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:44.660 02:15:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:44.660 02:15:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.660 02:15:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.660 02:15:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.660 02:15:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:44.660 02:15:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:44.660 02:15:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.918 02:15:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.918 02:15:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.918 02:15:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.918 02:15:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.918 02:15:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:44.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:13:44.918 00:13:44.918 --- 10.0.0.2 ping statistics --- 00:13:44.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.918 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:13:44.918 02:15:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:44.918 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.918 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:13:44.918 00:13:44.918 --- 10.0.0.3 ping statistics --- 00:13:44.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.918 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:44.918 02:15:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:44.918 00:13:44.918 --- 10.0.0.1 ping statistics --- 00:13:44.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.918 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:44.918 02:15:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.918 02:15:44 -- nvmf/common.sh@421 -- # return 0 00:13:44.918 02:15:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:44.918 02:15:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.918 02:15:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:44.918 02:15:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:44.918 02:15:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.918 02:15:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:44.918 02:15:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:44.918 02:15:44 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:44.918 02:15:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:44.918 02:15:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:44.918 02:15:44 -- common/autotest_common.sh@10 -- # set +x 00:13:44.918 02:15:44 -- nvmf/common.sh@469 -- # nvmfpid=82343 00:13:44.918 02:15:44 -- nvmf/common.sh@470 -- # waitforlisten 82343 00:13:44.918 02:15:44 -- common/autotest_common.sh@819 -- # '[' -z 82343 ']' 00:13:44.918 02:15:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:44.918 02:15:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.918 02:15:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:44.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.918 02:15:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.918 02:15:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:44.918 02:15:44 -- common/autotest_common.sh@10 -- # set +x 00:13:44.918 [2024-07-15 02:15:44.330510] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:44.918 [2024-07-15 02:15:44.330580] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.918 [2024-07-15 02:15:44.467040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.177 [2024-07-15 02:15:44.558222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:45.177 [2024-07-15 02:15:44.558390] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.177 [2024-07-15 02:15:44.558403] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.177 [2024-07-15 02:15:44.558412] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
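For anyone replaying the harness outside autotest, the nvmf_veth_init sequence above reduces to the sketch below (interface names and addresses verbatim from this run; the second target pair nvmf_tgt_if2/nvmf_tgt_br2, the pre-cleanup probes, and error handling are elided):

    # veth/netns topology built by nvmf_veth_init, condensed.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability, as verified above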
00:13:45.177 [2024-07-15 02:15:44.558589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.177 [2024-07-15 02:15:44.558712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.177 [2024-07-15 02:15:44.558719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.758 02:15:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:45.758 02:15:45 -- common/autotest_common.sh@852 -- # return 0 00:13:45.758 02:15:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:45.758 02:15:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:45.758 02:15:45 -- common/autotest_common.sh@10 -- # set +x 00:13:45.758 02:15:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.758 02:15:45 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:46.017 [2024-07-15 02:15:45.503545] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.017 02:15:45 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:46.584 02:15:45 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:46.584 02:15:45 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:46.843 02:15:46 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:46.843 02:15:46 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:46.843 02:15:46 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:47.410 02:15:46 -- target/nvmf_lvol.sh@29 -- # lvs=07d4bd2b-11b8-4630-87d4-95c7ab5d43f0 00:13:47.410 02:15:46 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 07d4bd2b-11b8-4630-87d4-95c7ab5d43f0 lvol 20 00:13:47.410 02:15:46 -- target/nvmf_lvol.sh@32 -- # lvol=dd8ea341-a7aa-49d0-96b4-984027c4878b 00:13:47.410 02:15:46 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:47.669 02:15:47 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dd8ea341-a7aa-49d0-96b4-984027c4878b 00:13:47.927 02:15:47 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:48.186 [2024-07-15 02:15:47.642949] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.186 02:15:47 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:48.444 02:15:47 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:48.444 02:15:47 -- target/nvmf_lvol.sh@42 -- # perf_pid=82491 00:13:48.444 02:15:47 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:49.378 02:15:48 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot dd8ea341-a7aa-49d0-96b4-984027c4878b MY_SNAPSHOT 00:13:49.944 02:15:49 -- target/nvmf_lvol.sh@47 -- # snapshot=0871d46d-d133-4509-8de5-a2df9ea38b56 00:13:49.944 02:15:49 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize dd8ea341-a7aa-49d0-96b4-984027c4878b 30 00:13:50.202 02:15:49 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0871d46d-d133-4509-8de5-a2df9ea38b56 MY_CLONE 00:13:50.460 02:15:49 -- target/nvmf_lvol.sh@49 -- # clone=917a7084-9b67-41ee-8b5a-0e25f0c3bdd2 00:13:50.460 02:15:49 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 917a7084-9b67-41ee-8b5a-0e25f0c3bdd2 00:13:51.026 02:15:50 -- target/nvmf_lvol.sh@53 -- # wait 82491 00:13:59.165 Initializing NVMe Controllers 00:13:59.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:59.165 Controller IO queue size 128, less than required. 00:13:59.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:59.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:59.165 Initialization complete. Launching workers. 00:13:59.165 ======================================================== 00:13:59.165 Latency(us) 00:13:59.165 Device Information : IOPS MiB/s Average min max 00:13:59.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10641.80 41.57 12031.49 1663.11 66383.78 00:13:59.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10767.50 42.06 11889.60 3308.52 75192.76 00:13:59.165 ======================================================== 00:13:59.165 Total : 21409.30 83.63 11960.13 1663.11 75192.76 00:13:59.165 00:13:59.165 02:15:58 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:59.165 02:15:58 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete dd8ea341-a7aa-49d0-96b4-984027c4878b 00:13:59.423 02:15:58 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 07d4bd2b-11b8-4630-87d4-95c7ab5d43f0 00:13:59.681 02:15:58 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:59.681 02:15:58 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:59.681 02:15:58 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:59.681 02:15:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:59.681 02:15:58 -- nvmf/common.sh@116 -- # sync 00:13:59.681 02:15:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:59.681 02:15:59 -- nvmf/common.sh@119 -- # set +e 00:13:59.681 02:15:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:59.681 02:15:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:59.681 rmmod nvme_tcp 00:13:59.681 rmmod nvme_fabrics 00:13:59.681 rmmod nvme_keyring 00:13:59.681 02:15:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:59.681 02:15:59 -- nvmf/common.sh@123 -- # set -e 00:13:59.681 02:15:59 -- nvmf/common.sh@124 -- # return 0 00:13:59.681 02:15:59 -- nvmf/common.sh@477 -- # '[' -n 82343 ']' 00:13:59.681 02:15:59 -- nvmf/common.sh@478 -- # killprocess 82343 00:13:59.681 02:15:59 -- common/autotest_common.sh@926 -- # '[' -z 82343 ']' 00:13:59.681 02:15:59 -- common/autotest_common.sh@930 -- # kill -0 82343 00:13:59.681 02:15:59 -- common/autotest_common.sh@931 -- # uname 00:13:59.681 02:15:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:59.681 02:15:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 82343 00:13:59.681 02:15:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:59.681 02:15:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:59.682 02:15:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82343' 00:13:59.682 killing process with pid 82343 00:13:59.682 02:15:59 -- common/autotest_common.sh@945 -- # kill 82343 00:13:59.682 02:15:59 -- common/autotest_common.sh@950 -- # wait 82343 00:13:59.941 02:15:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:59.941 02:15:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:59.941 02:15:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:59.941 02:15:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.941 02:15:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:59.941 02:15:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.941 02:15:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.941 02:15:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.941 02:15:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:59.941 00:13:59.941 real 0m15.588s 00:13:59.941 user 1m4.873s 00:13:59.941 sys 0m4.182s 00:13:59.941 02:15:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.941 02:15:59 -- common/autotest_common.sh@10 -- # set +x 00:13:59.941 ************************************ 00:13:59.941 END TEST nvmf_lvol 00:13:59.941 ************************************ 00:13:59.941 02:15:59 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:59.941 02:15:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:59.941 02:15:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:59.941 02:15:59 -- common/autotest_common.sh@10 -- # set +x 00:13:59.941 ************************************ 00:13:59.941 START TEST nvmf_lvs_grow 00:13:59.941 ************************************ 00:13:59.941 02:15:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:00.200 * Looking for test storage... 
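Before the lvs_grow output proper begins, the RPC sequence the nvmf_lvol test above drove condenses to the sketch below (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the <...> UUIDs are per-run values returned by the preceding call, e.g. 07d4bd2b-... and dd8ea341-... in this run):

    # nvmf_lvol's lvol lifecycle over a raid0 of two malloc bdevs, condensed.
    rpc.py bdev_malloc_create 64 512                              # -> Malloc0
    rpc.py bdev_malloc_create 64 512                              # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs                     # -> <lvs_uuid>
    rpc.py bdev_lvol_create -u <lvs_uuid> lvol 20                 # -> <lvol_uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_lvol_snapshot <lvol_uuid> MY_SNAPSHOT             # -> <snap_uuid>
    rpc.py bdev_lvol_resize <lvol_uuid> 30
    rpc.py bdev_lvol_clone <snap_uuid> MY_CLONE                   # -> <clone_uuid>
    rpc.py bdev_lvol_inflate <clone_uuid>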
00:14:00.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:00.200 02:15:59 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.200 02:15:59 -- nvmf/common.sh@7 -- # uname -s 00:14:00.200 02:15:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.200 02:15:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.200 02:15:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.200 02:15:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.200 02:15:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.200 02:15:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.200 02:15:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.200 02:15:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.200 02:15:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.200 02:15:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.200 02:15:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:14:00.200 02:15:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:14:00.200 02:15:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.200 02:15:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.200 02:15:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.200 02:15:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.200 02:15:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.200 02:15:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.200 02:15:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.200 02:15:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.200 02:15:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.200 02:15:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.200 02:15:59 -- 
paths/export.sh@5 -- # export PATH 00:14:00.200 02:15:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.200 02:15:59 -- nvmf/common.sh@46 -- # : 0 00:14:00.200 02:15:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:00.200 02:15:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:00.200 02:15:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:00.200 02:15:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.200 02:15:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.200 02:15:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:00.200 02:15:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:00.200 02:15:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:00.200 02:15:59 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.200 02:15:59 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.200 02:15:59 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:00.200 02:15:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:00.200 02:15:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.200 02:15:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:00.200 02:15:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:00.200 02:15:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:00.200 02:15:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.200 02:15:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.200 02:15:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.200 02:15:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:00.200 02:15:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:00.200 02:15:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:00.200 02:15:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:00.200 02:15:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:00.200 02:15:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:00.200 02:15:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.200 02:15:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.200 02:15:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.200 02:15:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:00.200 02:15:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.200 02:15:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.200 02:15:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.200 02:15:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.200 02:15:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.200 02:15:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.200 02:15:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.200 02:15:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.200 02:15:59 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:00.200 02:15:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:00.200 Cannot find device "nvmf_tgt_br" 00:14:00.200 02:15:59 -- nvmf/common.sh@154 -- # true 00:14:00.200 02:15:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.200 Cannot find device "nvmf_tgt_br2" 00:14:00.200 02:15:59 -- nvmf/common.sh@155 -- # true 00:14:00.200 02:15:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:00.200 02:15:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:00.200 Cannot find device "nvmf_tgt_br" 00:14:00.200 02:15:59 -- nvmf/common.sh@157 -- # true 00:14:00.200 02:15:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:00.200 Cannot find device "nvmf_tgt_br2" 00:14:00.200 02:15:59 -- nvmf/common.sh@158 -- # true 00:14:00.200 02:15:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:00.200 02:15:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:00.200 02:15:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.200 02:15:59 -- nvmf/common.sh@161 -- # true 00:14:00.200 02:15:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.200 02:15:59 -- nvmf/common.sh@162 -- # true 00:14:00.200 02:15:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.200 02:15:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.200 02:15:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.200 02:15:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.200 02:15:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.200 02:15:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.459 02:15:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.459 02:15:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.459 02:15:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.459 02:15:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:00.459 02:15:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:00.459 02:15:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:00.459 02:15:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:00.459 02:15:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.459 02:15:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.459 02:15:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.459 02:15:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:00.459 02:15:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:00.459 02:15:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.459 02:15:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.459 02:15:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.459 02:15:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.459 02:15:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.459 02:15:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:00.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:00.459 00:14:00.459 --- 10.0.0.2 ping statistics --- 00:14:00.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.459 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:00.459 02:15:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:00.459 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:00.459 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:14:00.459 00:14:00.459 --- 10.0.0.3 ping statistics --- 00:14:00.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.459 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:00.459 02:15:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:00.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:14:00.459 00:14:00.459 --- 10.0.0.1 ping statistics --- 00:14:00.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.459 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:00.459 02:15:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.459 02:15:59 -- nvmf/common.sh@421 -- # return 0 00:14:00.459 02:15:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:00.459 02:15:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.459 02:15:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:00.459 02:15:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:00.459 02:15:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.459 02:15:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:00.459 02:15:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:00.459 02:15:59 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:00.459 02:15:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:00.459 02:15:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:00.459 02:15:59 -- common/autotest_common.sh@10 -- # set +x 00:14:00.459 02:15:59 -- nvmf/common.sh@469 -- # nvmfpid=82852 00:14:00.459 02:15:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:00.459 02:15:59 -- nvmf/common.sh@470 -- # waitforlisten 82852 00:14:00.459 02:15:59 -- common/autotest_common.sh@819 -- # '[' -z 82852 ']' 00:14:00.459 02:15:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.459 02:15:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:00.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.459 02:15:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.459 02:15:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:00.459 02:15:59 -- common/autotest_common.sh@10 -- # set +x 00:14:00.459 [2024-07-15 02:15:59.974304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
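Note that, as above, the target binary itself is launched inside the namespace; every nvmf_tgt start in this log follows the same pattern, only the core mask differing per test:

    # Target launch pattern from this log (-m 0x1 here; nvmf_lvol used -m 0x7).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1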
00:14:00.459 [2024-07-15 02:15:59.974397] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.717 [2024-07-15 02:16:00.116417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.717 [2024-07-15 02:16:00.205194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:00.717 [2024-07-15 02:16:00.205412] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.717 [2024-07-15 02:16:00.205428] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.717 [2024-07-15 02:16:00.205443] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.717 [2024-07-15 02:16:00.205496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.674 02:16:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:01.674 02:16:00 -- common/autotest_common.sh@852 -- # return 0 00:14:01.674 02:16:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:01.674 02:16:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:01.674 02:16:00 -- common/autotest_common.sh@10 -- # set +x 00:14:01.674 02:16:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.674 02:16:01 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:01.932 [2024-07-15 02:16:01.268076] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:01.932 02:16:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:01.932 02:16:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:01.932 02:16:01 -- common/autotest_common.sh@10 -- # set +x 00:14:01.932 ************************************ 00:14:01.932 START TEST lvs_grow_clean 00:14:01.932 ************************************ 00:14:01.932 02:16:01 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:01.932 02:16:01 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:02.190 02:16:01 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:02.190 02:16:01 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:02.448 02:16:01 -- target/nvmf_lvs_grow.sh@28 
-- # lvs=b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:02.448 02:16:01 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:02.448 02:16:01 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:02.706 02:16:02 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:02.706 02:16:02 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:02.706 02:16:02 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b31c31fe-acad-4986-aaff-6b274ead1a75 lvol 150 00:14:02.964 02:16:02 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3d940511-1e01-41a6-b203-3bbbd37a0cca 00:14:02.964 02:16:02 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:02.964 02:16:02 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:03.222 [2024-07-15 02:16:02.610570] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:03.222 [2024-07-15 02:16:02.610696] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:03.222 true 00:14:03.222 02:16:02 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:03.222 02:16:02 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:03.479 02:16:02 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:03.479 02:16:02 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:03.737 02:16:03 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3d940511-1e01-41a6-b203-3bbbd37a0cca 00:14:03.995 02:16:03 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:04.253 [2024-07-15 02:16:03.667183] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.253 02:16:03 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:04.510 02:16:03 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83014 00:14:04.510 02:16:03 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:04.510 02:16:03 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.510 02:16:03 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83014 /var/tmp/bdevperf.sock 00:14:04.510 02:16:03 -- common/autotest_common.sh@819 -- # '[' -z 83014 ']' 00:14:04.510 02:16:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.510 02:16:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:04.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:04.510 02:16:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:04.510 02:16:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:04.510 02:16:03 -- common/autotest_common.sh@10 -- # set +x 00:14:04.510 [2024-07-15 02:16:03.997440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:04.510 [2024-07-15 02:16:03.997555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83014 ] 00:14:04.768 [2024-07-15 02:16:04.140016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.768 [2024-07-15 02:16:04.254067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.700 02:16:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:05.700 02:16:04 -- common/autotest_common.sh@852 -- # return 0 00:14:05.700 02:16:04 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:05.958 Nvme0n1 00:14:05.958 02:16:05 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:06.215 [ 00:14:06.215 { 00:14:06.215 "aliases": [ 00:14:06.215 "3d940511-1e01-41a6-b203-3bbbd37a0cca" 00:14:06.215 ], 00:14:06.215 "assigned_rate_limits": { 00:14:06.215 "r_mbytes_per_sec": 0, 00:14:06.215 "rw_ios_per_sec": 0, 00:14:06.215 "rw_mbytes_per_sec": 0, 00:14:06.215 "w_mbytes_per_sec": 0 00:14:06.215 }, 00:14:06.215 "block_size": 4096, 00:14:06.215 "claimed": false, 00:14:06.215 "driver_specific": { 00:14:06.215 "mp_policy": "active_passive", 00:14:06.215 "nvme": [ 00:14:06.215 { 00:14:06.215 "ctrlr_data": { 00:14:06.215 "ana_reporting": false, 00:14:06.215 "cntlid": 1, 00:14:06.215 "firmware_revision": "24.01.1", 00:14:06.215 "model_number": "SPDK bdev Controller", 00:14:06.215 "multi_ctrlr": true, 00:14:06.215 "oacs": { 00:14:06.215 "firmware": 0, 00:14:06.215 "format": 0, 00:14:06.215 "ns_manage": 0, 00:14:06.215 "security": 0 00:14:06.215 }, 00:14:06.215 "serial_number": "SPDK0", 00:14:06.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:06.215 "vendor_id": "0x8086" 00:14:06.215 }, 00:14:06.215 "ns_data": { 00:14:06.215 "can_share": true, 00:14:06.215 "id": 1 00:14:06.215 }, 00:14:06.215 "trid": { 00:14:06.215 "adrfam": "IPv4", 00:14:06.215 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:06.215 "traddr": "10.0.0.2", 00:14:06.215 "trsvcid": "4420", 00:14:06.215 "trtype": "TCP" 00:14:06.215 }, 00:14:06.215 "vs": { 00:14:06.215 "nvme_version": "1.3" 00:14:06.215 } 00:14:06.215 } 00:14:06.215 ] 00:14:06.215 }, 00:14:06.215 "name": "Nvme0n1", 00:14:06.215 "num_blocks": 38912, 00:14:06.215 "product_name": "NVMe disk", 00:14:06.215 "supported_io_types": { 00:14:06.215 "abort": true, 00:14:06.215 "compare": true, 00:14:06.215 "compare_and_write": true, 00:14:06.215 "flush": true, 00:14:06.215 "nvme_admin": true, 00:14:06.215 "nvme_io": true, 00:14:06.215 "read": true, 00:14:06.215 "reset": true, 00:14:06.215 "unmap": true, 00:14:06.215 "write": true, 00:14:06.215 "write_zeroes": true 00:14:06.215 }, 00:14:06.215 "uuid": "3d940511-1e01-41a6-b203-3bbbd37a0cca", 00:14:06.215 "zoned": false 00:14:06.215 } 00:14:06.215 ] 00:14:06.215 02:16:05 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83069 00:14:06.215 02:16:05 -- target/nvmf_lvs_grow.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:06.215 02:16:05 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:06.215 Running I/O for 10 seconds... 00:14:07.148 Latency(us) 00:14:07.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.148 Nvme0n1 : 1.00 7838.00 30.62 0.00 0.00 0.00 0.00 0.00 00:14:07.148 =================================================================================================================== 00:14:07.149 Total : 7838.00 30.62 0.00 0.00 0.00 0.00 0.00 00:14:07.149 00:14:08.085 02:16:07 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:08.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.343 Nvme0n1 : 2.00 7916.00 30.92 0.00 0.00 0.00 0.00 0.00 00:14:08.343 =================================================================================================================== 00:14:08.343 Total : 7916.00 30.92 0.00 0.00 0.00 0.00 0.00 00:14:08.343 00:14:08.606 true 00:14:08.606 02:16:07 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:08.606 02:16:07 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:08.881 02:16:08 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:08.881 02:16:08 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:08.881 02:16:08 -- target/nvmf_lvs_grow.sh@65 -- # wait 83069 00:14:09.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.447 Nvme0n1 : 3.00 7881.67 30.79 0.00 0.00 0.00 0.00 0.00 00:14:09.447 =================================================================================================================== 00:14:09.447 Total : 7881.67 30.79 0.00 0.00 0.00 0.00 0.00 00:14:09.447 00:14:10.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.384 Nvme0n1 : 4.00 7861.00 30.71 0.00 0.00 0.00 0.00 0.00 00:14:10.384 =================================================================================================================== 00:14:10.384 Total : 7861.00 30.71 0.00 0.00 0.00 0.00 0.00 00:14:10.384 00:14:11.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.320 Nvme0n1 : 5.00 7834.40 30.60 0.00 0.00 0.00 0.00 0.00 00:14:11.320 =================================================================================================================== 00:14:11.320 Total : 7834.40 30.60 0.00 0.00 0.00 0.00 0.00 00:14:11.320 00:14:12.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.254 Nvme0n1 : 6.00 7818.67 30.54 0.00 0.00 0.00 0.00 0.00 00:14:12.254 =================================================================================================================== 00:14:12.254 Total : 7818.67 30.54 0.00 0.00 0.00 0.00 0.00 00:14:12.254 00:14:13.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.187 Nvme0n1 : 7.00 7793.29 30.44 0.00 0.00 0.00 0.00 0.00 00:14:13.187 =================================================================================================================== 00:14:13.187 Total : 7793.29 30.44 0.00 0.00 0.00 0.00 0.00 00:14:13.187 00:14:14.560 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:14:14.560 Nvme0n1 : 8.00 7783.75 30.41 0.00 0.00 0.00 0.00 0.00 00:14:14.560 =================================================================================================================== 00:14:14.560 Total : 7783.75 30.41 0.00 0.00 0.00 0.00 0.00 00:14:14.560 00:14:15.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.495 Nvme0n1 : 9.00 7758.56 30.31 0.00 0.00 0.00 0.00 0.00 00:14:15.495 =================================================================================================================== 00:14:15.495 Total : 7758.56 30.31 0.00 0.00 0.00 0.00 0.00 00:14:15.495 00:14:16.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.431 Nvme0n1 : 10.00 7756.70 30.30 0.00 0.00 0.00 0.00 0.00 00:14:16.431 =================================================================================================================== 00:14:16.431 Total : 7756.70 30.30 0.00 0.00 0.00 0.00 0.00 00:14:16.431 00:14:16.431 00:14:16.431 Latency(us) 00:14:16.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.431 Nvme0n1 : 10.01 7765.44 30.33 0.00 0.00 16477.34 7804.74 33602.09 00:14:16.431 =================================================================================================================== 00:14:16.431 Total : 7765.44 30.33 0.00 0.00 16477.34 7804.74 33602.09 00:14:16.431 0 00:14:16.431 02:16:15 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83014 00:14:16.431 02:16:15 -- common/autotest_common.sh@926 -- # '[' -z 83014 ']' 00:14:16.431 02:16:15 -- common/autotest_common.sh@930 -- # kill -0 83014 00:14:16.431 02:16:15 -- common/autotest_common.sh@931 -- # uname 00:14:16.431 02:16:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:16.431 02:16:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83014 00:14:16.431 02:16:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:16.431 02:16:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:16.431 killing process with pid 83014 00:14:16.431 02:16:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83014' 00:14:16.431 Received shutdown signal, test time was about 10.000000 seconds 00:14:16.431 00:14:16.431 Latency(us) 00:14:16.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.431 =================================================================================================================== 00:14:16.431 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.431 02:16:15 -- common/autotest_common.sh@945 -- # kill 83014 00:14:16.431 02:16:15 -- common/autotest_common.sh@950 -- # wait 83014 00:14:16.699 02:16:16 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:16.964 02:16:16 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:16.964 02:16:16 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:17.222 02:16:16 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:17.222 02:16:16 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:17.222 02:16:16 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:17.222 [2024-07-15 02:16:16.758246] 
vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:17.480 02:16:16 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:17.480 02:16:16 -- common/autotest_common.sh@640 -- # local es=0 00:14:17.480 02:16:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:17.480 02:16:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.480 02:16:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:17.480 02:16:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.480 02:16:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:17.480 02:16:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.480 02:16:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:17.480 02:16:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.480 02:16:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:17.480 02:16:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:17.480 2024/07/15 02:16:16 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:b31c31fe-acad-4986-aaff-6b274ead1a75], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:17.480 request: 00:14:17.480 { 00:14:17.480 "method": "bdev_lvol_get_lvstores", 00:14:17.480 "params": { 00:14:17.480 "uuid": "b31c31fe-acad-4986-aaff-6b274ead1a75" 00:14:17.480 } 00:14:17.480 } 00:14:17.480 Got JSON-RPC error response 00:14:17.480 GoRPCClient: error on JSON-RPC call 00:14:17.480 02:16:17 -- common/autotest_common.sh@643 -- # es=1 00:14:17.480 02:16:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:17.480 02:16:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:17.480 02:16:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:17.480 02:16:17 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:17.739 aio_bdev 00:14:17.739 02:16:17 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 3d940511-1e01-41a6-b203-3bbbd37a0cca 00:14:17.739 02:16:17 -- common/autotest_common.sh@887 -- # local bdev_name=3d940511-1e01-41a6-b203-3bbbd37a0cca 00:14:17.739 02:16:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:17.739 02:16:17 -- common/autotest_common.sh@889 -- # local i 00:14:17.739 02:16:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:17.739 02:16:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:17.739 02:16:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:17.997 02:16:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d940511-1e01-41a6-b203-3bbbd37a0cca -t 2000 00:14:18.255 [ 00:14:18.255 { 00:14:18.255 "aliases": [ 00:14:18.255 "lvs/lvol" 00:14:18.255 ], 00:14:18.255 "assigned_rate_limits": { 00:14:18.255 "r_mbytes_per_sec": 0, 00:14:18.255 "rw_ios_per_sec": 0, 
00:14:18.255 "rw_mbytes_per_sec": 0, 00:14:18.255 "w_mbytes_per_sec": 0 00:14:18.255 }, 00:14:18.255 "block_size": 4096, 00:14:18.255 "claimed": false, 00:14:18.255 "driver_specific": { 00:14:18.255 "lvol": { 00:14:18.255 "base_bdev": "aio_bdev", 00:14:18.255 "clone": false, 00:14:18.255 "esnap_clone": false, 00:14:18.255 "lvol_store_uuid": "b31c31fe-acad-4986-aaff-6b274ead1a75", 00:14:18.255 "snapshot": false, 00:14:18.255 "thin_provision": false 00:14:18.255 } 00:14:18.255 }, 00:14:18.255 "name": "3d940511-1e01-41a6-b203-3bbbd37a0cca", 00:14:18.255 "num_blocks": 38912, 00:14:18.255 "product_name": "Logical Volume", 00:14:18.255 "supported_io_types": { 00:14:18.255 "abort": false, 00:14:18.255 "compare": false, 00:14:18.255 "compare_and_write": false, 00:14:18.255 "flush": false, 00:14:18.255 "nvme_admin": false, 00:14:18.255 "nvme_io": false, 00:14:18.255 "read": true, 00:14:18.255 "reset": true, 00:14:18.255 "unmap": true, 00:14:18.255 "write": true, 00:14:18.255 "write_zeroes": true 00:14:18.255 }, 00:14:18.255 "uuid": "3d940511-1e01-41a6-b203-3bbbd37a0cca", 00:14:18.255 "zoned": false 00:14:18.255 } 00:14:18.255 ] 00:14:18.255 02:16:17 -- common/autotest_common.sh@895 -- # return 0 00:14:18.255 02:16:17 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:18.255 02:16:17 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:18.514 02:16:17 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:18.514 02:16:17 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:18.514 02:16:17 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:18.773 02:16:18 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:18.773 02:16:18 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3d940511-1e01-41a6-b203-3bbbd37a0cca 00:14:19.031 02:16:18 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b31c31fe-acad-4986-aaff-6b274ead1a75 00:14:19.289 02:16:18 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:19.547 02:16:18 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:19.806 ************************************ 00:14:19.806 END TEST lvs_grow_clean 00:14:19.806 ************************************ 00:14:19.806 00:14:19.806 real 0m18.014s 00:14:19.806 user 0m17.135s 00:14:19.806 sys 0m2.420s 00:14:19.806 02:16:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.806 02:16:19 -- common/autotest_common.sh@10 -- # set +x 00:14:19.806 02:16:19 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:19.806 02:16:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:19.806 02:16:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:19.806 02:16:19 -- common/autotest_common.sh@10 -- # set +x 00:14:20.064 ************************************ 00:14:20.064 START TEST lvs_grow_dirty 00:14:20.064 ************************************ 00:14:20.064 02:16:19 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:20.064 02:16:19 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:20.064 02:16:19 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:20.064 02:16:19 -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:20.064 02:16:19 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:20.064 02:16:19 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:20.064 02:16:19 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:20.064 02:16:19 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:20.064 02:16:19 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:20.064 02:16:19 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:20.321 02:16:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:20.321 02:16:19 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:20.578 02:16:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:20.578 02:16:19 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:20.578 02:16:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:20.836 02:16:20 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:20.836 02:16:20 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:20.836 02:16:20 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 95b52149-a06f-4bc0-a312-4fbaf1026217 lvol 150 00:14:21.093 02:16:20 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490 00:14:21.093 02:16:20 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:21.093 02:16:20 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:21.351 [2024-07-15 02:16:20.727423] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:21.351 [2024-07-15 02:16:20.727512] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:21.351 true 00:14:21.351 02:16:20 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:21.351 02:16:20 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:21.608 02:16:20 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:21.608 02:16:20 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:21.866 02:16:21 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490 00:14:21.866 02:16:21 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:22.123 02:16:21 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.689 02:16:21 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83456 00:14:22.689 02:16:21 -- 
target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.689 02:16:21 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:22.689 02:16:21 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83456 /var/tmp/bdevperf.sock 00:14:22.689 02:16:21 -- common/autotest_common.sh@819 -- # '[' -z 83456 ']' 00:14:22.689 02:16:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.689 02:16:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:22.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.689 02:16:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.689 02:16:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:22.689 02:16:21 -- common/autotest_common.sh@10 -- # set +x 00:14:22.689 [2024-07-15 02:16:21.987660] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:22.689 [2024-07-15 02:16:21.987797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83456 ] 00:14:22.689 [2024-07-15 02:16:22.122474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.689 [2024-07-15 02:16:22.208831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.623 02:16:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:23.623 02:16:22 -- common/autotest_common.sh@852 -- # return 0 00:14:23.623 02:16:22 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:23.888 Nvme0n1 00:14:23.889 02:16:23 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:24.147 [ 00:14:24.147 { 00:14:24.147 "aliases": [ 00:14:24.147 "5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490" 00:14:24.147 ], 00:14:24.147 "assigned_rate_limits": { 00:14:24.147 "r_mbytes_per_sec": 0, 00:14:24.147 "rw_ios_per_sec": 0, 00:14:24.147 "rw_mbytes_per_sec": 0, 00:14:24.147 "w_mbytes_per_sec": 0 00:14:24.147 }, 00:14:24.147 "block_size": 4096, 00:14:24.147 "claimed": false, 00:14:24.147 "driver_specific": { 00:14:24.147 "mp_policy": "active_passive", 00:14:24.147 "nvme": [ 00:14:24.147 { 00:14:24.147 "ctrlr_data": { 00:14:24.147 "ana_reporting": false, 00:14:24.147 "cntlid": 1, 00:14:24.147 "firmware_revision": "24.01.1", 00:14:24.147 "model_number": "SPDK bdev Controller", 00:14:24.147 "multi_ctrlr": true, 00:14:24.147 "oacs": { 00:14:24.147 "firmware": 0, 00:14:24.147 "format": 0, 00:14:24.147 "ns_manage": 0, 00:14:24.147 "security": 0 00:14:24.147 }, 00:14:24.147 "serial_number": "SPDK0", 00:14:24.147 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:24.147 "vendor_id": "0x8086" 00:14:24.147 }, 00:14:24.147 "ns_data": { 00:14:24.147 "can_share": true, 00:14:24.147 "id": 1 00:14:24.147 }, 00:14:24.147 "trid": { 00:14:24.147 "adrfam": "IPv4", 00:14:24.147 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:24.147 "traddr": "10.0.0.2", 00:14:24.147 "trsvcid": "4420", 00:14:24.147 "trtype": "TCP" 00:14:24.147 }, 
00:14:24.147 "vs": { 00:14:24.147 "nvme_version": "1.3" 00:14:24.147 } 00:14:24.147 } 00:14:24.147 ] 00:14:24.147 }, 00:14:24.147 "name": "Nvme0n1", 00:14:24.147 "num_blocks": 38912, 00:14:24.147 "product_name": "NVMe disk", 00:14:24.147 "supported_io_types": { 00:14:24.147 "abort": true, 00:14:24.147 "compare": true, 00:14:24.147 "compare_and_write": true, 00:14:24.147 "flush": true, 00:14:24.147 "nvme_admin": true, 00:14:24.147 "nvme_io": true, 00:14:24.147 "read": true, 00:14:24.147 "reset": true, 00:14:24.147 "unmap": true, 00:14:24.147 "write": true, 00:14:24.147 "write_zeroes": true 00:14:24.147 }, 00:14:24.147 "uuid": "5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490", 00:14:24.147 "zoned": false 00:14:24.147 } 00:14:24.147 ] 00:14:24.147 02:16:23 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83504 00:14:24.147 02:16:23 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.147 02:16:23 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:24.147 Running I/O for 10 seconds... 00:14:25.081 Latency(us) 00:14:25.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.081 Nvme0n1 : 1.00 8054.00 31.46 0.00 0.00 0.00 0.00 0.00 00:14:25.081 =================================================================================================================== 00:14:25.081 Total : 8054.00 31.46 0.00 0.00 0.00 0.00 0.00 00:14:25.081 00:14:26.015 02:16:25 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:26.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.273 Nvme0n1 : 2.00 7988.00 31.20 0.00 0.00 0.00 0.00 0.00 00:14:26.273 =================================================================================================================== 00:14:26.273 Total : 7988.00 31.20 0.00 0.00 0.00 0.00 0.00 00:14:26.273 00:14:26.273 true 00:14:26.273 02:16:25 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:26.273 02:16:25 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:26.864 02:16:26 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:26.864 02:16:26 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:26.864 02:16:26 -- target/nvmf_lvs_grow.sh@65 -- # wait 83504 00:14:27.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.123 Nvme0n1 : 3.00 7994.00 31.23 0.00 0.00 0.00 0.00 0.00 00:14:27.123 =================================================================================================================== 00:14:27.123 Total : 7994.00 31.23 0.00 0.00 0.00 0.00 0.00 00:14:27.123 00:14:28.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.059 Nvme0n1 : 4.00 7942.25 31.02 0.00 0.00 0.00 0.00 0.00 00:14:28.059 =================================================================================================================== 00:14:28.059 Total : 7942.25 31.02 0.00 0.00 0.00 0.00 0.00 00:14:28.059 00:14:29.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.436 Nvme0n1 : 5.00 7899.00 30.86 0.00 0.00 0.00 0.00 0.00 00:14:29.436 
=================================================================================================================== 00:14:29.436 Total : 7899.00 30.86 0.00 0.00 0.00 0.00 0.00 00:14:29.436 00:14:30.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.372 Nvme0n1 : 6.00 7883.00 30.79 0.00 0.00 0.00 0.00 0.00 00:14:30.372 =================================================================================================================== 00:14:30.372 Total : 7883.00 30.79 0.00 0.00 0.00 0.00 0.00 00:14:30.372 00:14:31.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.306 Nvme0n1 : 7.00 7677.57 29.99 0.00 0.00 0.00 0.00 0.00 00:14:31.306 =================================================================================================================== 00:14:31.306 Total : 7677.57 29.99 0.00 0.00 0.00 0.00 0.00 00:14:31.306 00:14:32.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.242 Nvme0n1 : 8.00 7656.62 29.91 0.00 0.00 0.00 0.00 0.00 00:14:32.242 =================================================================================================================== 00:14:32.242 Total : 7656.62 29.91 0.00 0.00 0.00 0.00 0.00 00:14:32.242 00:14:33.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.177 Nvme0n1 : 9.00 7629.44 29.80 0.00 0.00 0.00 0.00 0.00 00:14:33.177 =================================================================================================================== 00:14:33.177 Total : 7629.44 29.80 0.00 0.00 0.00 0.00 0.00 00:14:33.177 00:14:34.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.112 Nvme0n1 : 10.00 7619.10 29.76 0.00 0.00 0.00 0.00 0.00 00:14:34.112 =================================================================================================================== 00:14:34.112 Total : 7619.10 29.76 0.00 0.00 0.00 0.00 0.00 00:14:34.112 00:14:34.112 00:14:34.112 Latency(us) 00:14:34.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.112 Nvme0n1 : 10.01 7618.79 29.76 0.00 0.00 16790.32 6017.40 191603.43 00:14:34.112 =================================================================================================================== 00:14:34.112 Total : 7618.79 29.76 0.00 0.00 16790.32 6017.40 191603.43 00:14:34.112 0 00:14:34.112 02:16:33 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83456 00:14:34.112 02:16:33 -- common/autotest_common.sh@926 -- # '[' -z 83456 ']' 00:14:34.112 02:16:33 -- common/autotest_common.sh@930 -- # kill -0 83456 00:14:34.112 02:16:33 -- common/autotest_common.sh@931 -- # uname 00:14:34.112 02:16:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:34.112 02:16:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83456 00:14:34.112 02:16:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:34.112 02:16:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:34.112 killing process with pid 83456 00:14:34.112 02:16:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83456' 00:14:34.112 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.112 00:14:34.112 Latency(us) 00:14:34.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.112 
=================================================================================================================== 00:14:34.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.112 02:16:33 -- common/autotest_common.sh@945 -- # kill 83456 00:14:34.112 02:16:33 -- common/autotest_common.sh@950 -- # wait 83456 00:14:34.679 02:16:33 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:34.679 02:16:34 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:34.679 02:16:34 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:34.937 02:16:34 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:34.937 02:16:34 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:34.937 02:16:34 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 82852 00:14:34.937 02:16:34 -- target/nvmf_lvs_grow.sh@74 -- # wait 82852 00:14:35.195 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 82852 Killed "${NVMF_APP[@]}" "$@" 00:14:35.195 02:16:34 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:35.195 02:16:34 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:35.195 02:16:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:35.195 02:16:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:35.195 02:16:34 -- common/autotest_common.sh@10 -- # set +x 00:14:35.195 02:16:34 -- nvmf/common.sh@469 -- # nvmfpid=83654 00:14:35.195 02:16:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:35.195 02:16:34 -- nvmf/common.sh@470 -- # waitforlisten 83654 00:14:35.195 02:16:34 -- common/autotest_common.sh@819 -- # '[' -z 83654 ']' 00:14:35.195 02:16:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.195 02:16:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:35.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.195 02:16:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.195 02:16:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:35.195 02:16:34 -- common/autotest_common.sh@10 -- # set +x 00:14:35.195 [2024-07-15 02:16:34.558294] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:35.195 [2024-07-15 02:16:34.558410] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.195 [2024-07-15 02:16:34.700171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.451 [2024-07-15 02:16:34.788485] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:35.452 [2024-07-15 02:16:34.788653] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.452 [2024-07-15 02:16:34.788666] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.452 [2024-07-15 02:16:34.788675] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
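(What distinguishes this dirty variant from lvs_grow_clean above: the old nvmf target was removed with kill -9, so the lvstore was never cleanly unloaded. When the freshly started target re-creates the aio bdev just below (rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096, same shorthand as the earlier note), the blobstore load detects the unclean shutdown and replays recovery; that is what the 'Performing recovery on blobstore' and 'Recover: blob 0x0/0x1' notices that follow are reporting. The free_clusters == 61 assertion seen in both variants is simple arithmetic: the 150 MiB lvol is rounded up to a whole number of 4 MiB clusters, ceil(150 / 4) = 38 (matching its num_blocks of 38912 4 KiB blocks, i.e. 152 MiB), so of the 99 data clusters 99 - 38 = 61 remain free.)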
00:14:35.452 [2024-07-15 02:16:34.788699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.018 02:16:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:36.018 02:16:35 -- common/autotest_common.sh@852 -- # return 0 00:14:36.018 02:16:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:36.018 02:16:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:36.018 02:16:35 -- common/autotest_common.sh@10 -- # set +x 00:14:36.018 02:16:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.018 02:16:35 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:36.276 [2024-07-15 02:16:35.731343] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:36.276 [2024-07-15 02:16:35.731694] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:36.276 [2024-07-15 02:16:35.731850] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:36.276 02:16:35 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:36.276 02:16:35 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490 00:14:36.276 02:16:35 -- common/autotest_common.sh@887 -- # local bdev_name=5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490 00:14:36.276 02:16:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:36.276 02:16:35 -- common/autotest_common.sh@889 -- # local i 00:14:36.276 02:16:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:36.276 02:16:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:36.276 02:16:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:36.535 02:16:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490 -t 2000 00:14:36.794 [ 00:14:36.794 { 00:14:36.794 "aliases": [ 00:14:36.794 "lvs/lvol" 00:14:36.794 ], 00:14:36.794 "assigned_rate_limits": { 00:14:36.794 "r_mbytes_per_sec": 0, 00:14:36.794 "rw_ios_per_sec": 0, 00:14:36.794 "rw_mbytes_per_sec": 0, 00:14:36.794 "w_mbytes_per_sec": 0 00:14:36.794 }, 00:14:36.794 "block_size": 4096, 00:14:36.794 "claimed": false, 00:14:36.794 "driver_specific": { 00:14:36.794 "lvol": { 00:14:36.794 "base_bdev": "aio_bdev", 00:14:36.794 "clone": false, 00:14:36.794 "esnap_clone": false, 00:14:36.794 "lvol_store_uuid": "95b52149-a06f-4bc0-a312-4fbaf1026217", 00:14:36.794 "snapshot": false, 00:14:36.794 "thin_provision": false 00:14:36.794 } 00:14:36.794 }, 00:14:36.794 "name": "5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490", 00:14:36.794 "num_blocks": 38912, 00:14:36.794 "product_name": "Logical Volume", 00:14:36.794 "supported_io_types": { 00:14:36.794 "abort": false, 00:14:36.794 "compare": false, 00:14:36.794 "compare_and_write": false, 00:14:36.794 "flush": false, 00:14:36.794 "nvme_admin": false, 00:14:36.794 "nvme_io": false, 00:14:36.794 "read": true, 00:14:36.794 "reset": true, 00:14:36.794 "unmap": true, 00:14:36.794 "write": true, 00:14:36.794 "write_zeroes": true 00:14:36.794 }, 00:14:36.794 "uuid": "5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490", 00:14:36.794 "zoned": false 00:14:36.794 } 00:14:36.794 ] 00:14:36.794 02:16:36 -- common/autotest_common.sh@895 -- # return 0 00:14:36.794 02:16:36 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:36.794 02:16:36 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:37.052 02:16:36 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:37.052 02:16:36 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:37.052 02:16:36 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:37.313 02:16:36 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:37.313 02:16:36 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:37.577 [2024-07-15 02:16:36.872766] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:37.577 02:16:36 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:37.577 02:16:36 -- common/autotest_common.sh@640 -- # local es=0 00:14:37.577 02:16:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:37.577 02:16:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.577 02:16:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.577 02:16:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.577 02:16:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.577 02:16:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.577 02:16:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.577 02:16:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.577 02:16:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:37.577 02:16:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:37.577 2024/07/15 02:16:37 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:95b52149-a06f-4bc0-a312-4fbaf1026217], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:37.577 request: 00:14:37.577 { 00:14:37.577 "method": "bdev_lvol_get_lvstores", 00:14:37.577 "params": { 00:14:37.577 "uuid": "95b52149-a06f-4bc0-a312-4fbaf1026217" 00:14:37.577 } 00:14:37.577 } 00:14:37.577 Got JSON-RPC error response 00:14:37.577 GoRPCClient: error on JSON-RPC call 00:14:37.834 02:16:37 -- common/autotest_common.sh@643 -- # es=1 00:14:37.834 02:16:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:37.834 02:16:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:37.834 02:16:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:37.834 02:16:37 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:37.834 aio_bdev 00:14:38.092 02:16:37 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490 00:14:38.092 02:16:37 -- common/autotest_common.sh@887 -- # local bdev_name=5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490 00:14:38.092 02:16:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:38.092 
02:16:37 -- common/autotest_common.sh@889 -- # local i 00:14:38.092 02:16:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:38.092 02:16:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:38.092 02:16:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:38.351 02:16:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490 -t 2000 00:14:38.351 [ 00:14:38.351 { 00:14:38.351 "aliases": [ 00:14:38.351 "lvs/lvol" 00:14:38.351 ], 00:14:38.351 "assigned_rate_limits": { 00:14:38.351 "r_mbytes_per_sec": 0, 00:14:38.351 "rw_ios_per_sec": 0, 00:14:38.351 "rw_mbytes_per_sec": 0, 00:14:38.351 "w_mbytes_per_sec": 0 00:14:38.351 }, 00:14:38.351 "block_size": 4096, 00:14:38.351 "claimed": false, 00:14:38.351 "driver_specific": { 00:14:38.351 "lvol": { 00:14:38.351 "base_bdev": "aio_bdev", 00:14:38.351 "clone": false, 00:14:38.351 "esnap_clone": false, 00:14:38.351 "lvol_store_uuid": "95b52149-a06f-4bc0-a312-4fbaf1026217", 00:14:38.351 "snapshot": false, 00:14:38.351 "thin_provision": false 00:14:38.351 } 00:14:38.351 }, 00:14:38.351 "name": "5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490", 00:14:38.351 "num_blocks": 38912, 00:14:38.351 "product_name": "Logical Volume", 00:14:38.351 "supported_io_types": { 00:14:38.351 "abort": false, 00:14:38.351 "compare": false, 00:14:38.351 "compare_and_write": false, 00:14:38.351 "flush": false, 00:14:38.351 "nvme_admin": false, 00:14:38.351 "nvme_io": false, 00:14:38.351 "read": true, 00:14:38.351 "reset": true, 00:14:38.351 "unmap": true, 00:14:38.351 "write": true, 00:14:38.351 "write_zeroes": true 00:14:38.351 }, 00:14:38.351 "uuid": "5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490", 00:14:38.351 "zoned": false 00:14:38.351 } 00:14:38.351 ] 00:14:38.351 02:16:37 -- common/autotest_common.sh@895 -- # return 0 00:14:38.351 02:16:37 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:38.351 02:16:37 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:38.609 02:16:38 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:38.609 02:16:38 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:38.609 02:16:38 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:38.867 02:16:38 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:38.867 02:16:38 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5bd9e3e6-3d27-4e5c-b0b0-f93449ee0490 00:14:39.125 02:16:38 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95b52149-a06f-4bc0-a312-4fbaf1026217 00:14:39.383 02:16:38 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:39.640 02:16:39 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:39.898 00:14:39.898 real 0m20.052s 00:14:39.898 user 0m40.711s 00:14:39.898 sys 0m9.067s 00:14:39.898 ************************************ 00:14:39.898 END TEST lvs_grow_dirty 00:14:39.898 ************************************ 00:14:39.898 02:16:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.898 02:16:39 -- common/autotest_common.sh@10 -- # set +x 00:14:40.155 02:16:39 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:40.155 02:16:39 -- common/autotest_common.sh@796 -- # type=--id 00:14:40.155 02:16:39 -- common/autotest_common.sh@797 -- # id=0 00:14:40.155 02:16:39 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:40.155 02:16:39 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:40.155 02:16:39 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:40.155 02:16:39 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:40.155 02:16:39 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:40.155 02:16:39 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:40.155 nvmf_trace.0 00:14:40.155 02:16:39 -- common/autotest_common.sh@811 -- # return 0 00:14:40.155 02:16:39 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:40.155 02:16:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:40.155 02:16:39 -- nvmf/common.sh@116 -- # sync 00:14:40.155 02:16:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:40.155 02:16:39 -- nvmf/common.sh@119 -- # set +e 00:14:40.155 02:16:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:40.155 02:16:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:40.155 rmmod nvme_tcp 00:14:40.155 rmmod nvme_fabrics 00:14:40.155 rmmod nvme_keyring 00:14:40.413 02:16:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:40.413 02:16:39 -- nvmf/common.sh@123 -- # set -e 00:14:40.413 02:16:39 -- nvmf/common.sh@124 -- # return 0 00:14:40.413 02:16:39 -- nvmf/common.sh@477 -- # '[' -n 83654 ']' 00:14:40.413 02:16:39 -- nvmf/common.sh@478 -- # killprocess 83654 00:14:40.413 02:16:39 -- common/autotest_common.sh@926 -- # '[' -z 83654 ']' 00:14:40.413 02:16:39 -- common/autotest_common.sh@930 -- # kill -0 83654 00:14:40.413 02:16:39 -- common/autotest_common.sh@931 -- # uname 00:14:40.413 02:16:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:40.413 02:16:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83654 00:14:40.413 02:16:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:40.413 02:16:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:40.413 killing process with pid 83654 00:14:40.413 02:16:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83654' 00:14:40.413 02:16:39 -- common/autotest_common.sh@945 -- # kill 83654 00:14:40.413 02:16:39 -- common/autotest_common.sh@950 -- # wait 83654 00:14:40.413 02:16:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:40.413 02:16:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:40.413 02:16:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:40.413 02:16:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.413 02:16:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:40.413 02:16:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.413 02:16:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.413 02:16:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.671 02:16:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:40.671 00:14:40.671 real 0m40.520s 00:14:40.671 user 1m3.848s 00:14:40.671 sys 0m12.215s 00:14:40.671 02:16:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.671 ************************************ 00:14:40.671 END TEST nvmf_lvs_grow 00:14:40.671 
************************************ 00:14:40.671 02:16:39 -- common/autotest_common.sh@10 -- # set +x 00:14:40.671 02:16:40 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:40.671 02:16:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:40.671 02:16:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:40.671 02:16:40 -- common/autotest_common.sh@10 -- # set +x 00:14:40.671 ************************************ 00:14:40.671 START TEST nvmf_bdev_io_wait 00:14:40.671 ************************************ 00:14:40.671 02:16:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:40.671 * Looking for test storage... 00:14:40.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:40.671 02:16:40 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:40.671 02:16:40 -- nvmf/common.sh@7 -- # uname -s 00:14:40.671 02:16:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.671 02:16:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.671 02:16:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.671 02:16:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.671 02:16:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.671 02:16:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.671 02:16:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.671 02:16:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.671 02:16:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.671 02:16:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.671 02:16:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:14:40.671 02:16:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:14:40.671 02:16:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.671 02:16:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.671 02:16:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:40.671 02:16:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:40.671 02:16:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.671 02:16:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.671 02:16:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.671 02:16:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.672 02:16:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.672 02:16:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.672 02:16:40 -- paths/export.sh@5 -- # export PATH 00:14:40.672 02:16:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.672 02:16:40 -- nvmf/common.sh@46 -- # : 0 00:14:40.672 02:16:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:40.672 02:16:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:40.672 02:16:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:40.672 02:16:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.672 02:16:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.672 02:16:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:40.672 02:16:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:40.672 02:16:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:40.672 02:16:40 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.672 02:16:40 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:40.672 02:16:40 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:40.672 02:16:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:40.672 02:16:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.672 02:16:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:40.672 02:16:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:40.672 02:16:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:40.672 02:16:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.672 02:16:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.672 02:16:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.672 02:16:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:40.672 02:16:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:40.672 02:16:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:40.672 02:16:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:40.672 02:16:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
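(nvmf_veth_init, traced immediately below, builds the test network from plain iproute2 primitives: a network namespace for the target, veth pairs whose host-side peers are enslaved to a bridge, and an iptables accept rule for the NVMe/TCP port. A condensed sketch of the end state, using the names and addresses from this run and omitting the second target interface for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

This is why the subsystems in the tests above listen on 10.0.0.2:4420: that is the target-namespace address reachable from the host across the bridge, as the ping checks below confirm.)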
00:14:40.672 02:16:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:40.672 02:16:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.672 02:16:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.672 02:16:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:40.672 02:16:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:40.672 02:16:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:40.672 02:16:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:40.672 02:16:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:40.672 02:16:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.672 02:16:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:40.672 02:16:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:40.672 02:16:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:40.672 02:16:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:40.672 02:16:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:40.672 02:16:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:40.672 Cannot find device "nvmf_tgt_br" 00:14:40.672 02:16:40 -- nvmf/common.sh@154 -- # true 00:14:40.672 02:16:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.672 Cannot find device "nvmf_tgt_br2" 00:14:40.672 02:16:40 -- nvmf/common.sh@155 -- # true 00:14:40.672 02:16:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:40.672 02:16:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:40.672 Cannot find device "nvmf_tgt_br" 00:14:40.672 02:16:40 -- nvmf/common.sh@157 -- # true 00:14:40.672 02:16:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:40.672 Cannot find device "nvmf_tgt_br2" 00:14:40.672 02:16:40 -- nvmf/common.sh@158 -- # true 00:14:40.672 02:16:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:40.930 02:16:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:40.930 02:16:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.930 02:16:40 -- nvmf/common.sh@161 -- # true 00:14:40.930 02:16:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.930 02:16:40 -- nvmf/common.sh@162 -- # true 00:14:40.930 02:16:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:40.930 02:16:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:40.930 02:16:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:40.930 02:16:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:40.930 02:16:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:40.930 02:16:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:40.930 02:16:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:40.930 02:16:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:40.930 02:16:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:40.930 
02:16:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:14:40.930 02:16:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:14:40.930 02:16:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:14:40.930 02:16:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:14:40.930 02:16:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:14:40.930 02:16:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:14:40.930 02:16:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:14:40.930 02:16:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:14:40.930 02:16:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:14:40.930 02:16:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:14:40.930 02:16:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:14:40.930 02:16:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:14:40.930 02:16:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:14:40.930 02:16:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:14:40.930 02:16:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:14:40.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:40.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:14:40.930
00:14:40.930 --- 10.0.0.2 ping statistics ---
00:14:40.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:40.930 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:14:40.930 02:16:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:14:40.930 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:14:40.930 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms
00:14:40.930
00:14:40.930 --- 10.0.0.3 ping statistics ---
00:14:40.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:40.930 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:14:40.930 02:16:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:14:40.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:40.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
00:14:40.930
00:14:40.930 --- 10.0.0.1 ping statistics ---
00:14:40.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:40.930 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms
00:14:40.930 02:16:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:40.930 02:16:40 -- nvmf/common.sh@421 -- # return 0
00:14:40.930 02:16:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:14:40.930 02:16:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:40.930 02:16:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:14:40.930 02:16:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:14:40.930 02:16:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:40.930 02:16:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:14:40.930 02:16:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:14:41.188 02:16:40 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:14:41.188 02:16:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:14:41.188 02:16:40 -- common/autotest_common.sh@712 -- # xtrace_disable
00:14:41.188 02:16:40 -- common/autotest_common.sh@10 -- # set +x
00:14:41.188 02:16:40 -- nvmf/common.sh@469 -- # nvmfpid=84067
00:14:41.188 02:16:40 -- nvmf/common.sh@470 -- # waitforlisten 84067
00:14:41.188 02:16:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:14:41.188 02:16:40 -- common/autotest_common.sh@819 -- # '[' -z 84067 ']'
00:14:41.188 02:16:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:41.188 02:16:40 -- common/autotest_common.sh@824 -- # local max_retries=100
00:14:41.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:41.188 02:16:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:41.188 02:16:40 -- common/autotest_common.sh@828 -- # xtrace_disable
00:14:41.188 02:16:40 -- common/autotest_common.sh@10 -- # set +x
00:14:41.188 [2024-07-15 02:16:40.562874] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:14:41.188 [2024-07-15 02:16:40.562977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:41.188 [2024-07-15 02:16:40.706341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:41.446 [2024-07-15 02:16:40.801034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:41.446 [2024-07-15 02:16:40.801366] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:41.446 [2024-07-15 02:16:40.801386] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:41.446 [2024-07-15 02:16:40.801411] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
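
Condensing the nvmf_veth_init trace above: the fixture keeps the initiator's nvmf_init_if (10.0.0.1) in the default namespace, moves nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) into nvmf_tgt_ns_spdk, and joins the three peer ends through the nvmf_br bridge. A hedged replay of the essential commands (link-up steps and the initial teardown of stale devices are omitted):

# Condensed from the nvmf_veth_init trace above; run as root, error handling omitted.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target IP
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target IP
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                        # bridge the peer ends
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # the sanity pings seen above
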
00:14:41.446 [2024-07-15 02:16:40.801561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.446 [2024-07-15 02:16:40.801748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.446 [2024-07-15 02:16:40.802496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.446 [2024-07-15 02:16:40.802530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.011 02:16:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.011 02:16:41 -- common/autotest_common.sh@852 -- # return 0 00:14:42.011 02:16:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:42.011 02:16:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:42.011 02:16:41 -- common/autotest_common.sh@10 -- # set +x 00:14:42.269 02:16:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:42.269 02:16:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.269 02:16:41 -- common/autotest_common.sh@10 -- # set +x 00:14:42.269 02:16:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:42.269 02:16:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.269 02:16:41 -- common/autotest_common.sh@10 -- # set +x 00:14:42.269 02:16:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.269 02:16:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.269 02:16:41 -- common/autotest_common.sh@10 -- # set +x 00:14:42.269 [2024-07-15 02:16:41.662449] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.269 02:16:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:42.269 02:16:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.269 02:16:41 -- common/autotest_common.sh@10 -- # set +x 00:14:42.269 Malloc0 00:14:42.269 02:16:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:42.269 02:16:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.269 02:16:41 -- common/autotest_common.sh@10 -- # set +x 00:14:42.269 02:16:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:42.269 02:16:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.269 02:16:41 -- common/autotest_common.sh@10 -- # set +x 00:14:42.269 02:16:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.269 02:16:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.269 02:16:41 -- common/autotest_common.sh@10 -- # set +x 00:14:42.269 [2024-07-15 02:16:41.721758] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.269 02:16:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84120 00:14:42.269 02:16:41 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@30 -- # READ_PID=84122 00:14:42.269 02:16:41 -- nvmf/common.sh@520 -- # config=() 00:14:42.269 02:16:41 -- nvmf/common.sh@520 -- # local subsystem config 00:14:42.269 02:16:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:42.269 02:16:41 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:42.269 02:16:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:42.269 { 00:14:42.269 "params": { 00:14:42.269 "name": "Nvme$subsystem", 00:14:42.269 "trtype": "$TEST_TRANSPORT", 00:14:42.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:42.269 "adrfam": "ipv4", 00:14:42.269 "trsvcid": "$NVMF_PORT", 00:14:42.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:42.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:42.269 "hdgst": ${hdgst:-false}, 00:14:42.269 "ddgst": ${ddgst:-false} 00:14:42.269 }, 00:14:42.269 "method": "bdev_nvme_attach_controller" 00:14:42.269 } 00:14:42.269 EOF 00:14:42.269 )") 00:14:42.270 02:16:41 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:42.270 02:16:41 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84124 00:14:42.270 02:16:41 -- nvmf/common.sh@520 -- # config=() 00:14:42.270 02:16:41 -- nvmf/common.sh@520 -- # local subsystem config 00:14:42.270 02:16:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:42.270 02:16:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:42.270 { 00:14:42.270 "params": { 00:14:42.270 "name": "Nvme$subsystem", 00:14:42.270 "trtype": "$TEST_TRANSPORT", 00:14:42.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:42.270 "adrfam": "ipv4", 00:14:42.270 "trsvcid": "$NVMF_PORT", 00:14:42.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:42.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:42.270 "hdgst": ${hdgst:-false}, 00:14:42.270 "ddgst": ${ddgst:-false} 00:14:42.270 }, 00:14:42.270 "method": "bdev_nvme_attach_controller" 00:14:42.270 } 00:14:42.270 EOF 00:14:42.270 )") 00:14:42.270 02:16:41 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:42.270 02:16:41 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84127 00:14:42.270 02:16:41 -- nvmf/common.sh@542 -- # cat 00:14:42.270 02:16:41 -- target/bdev_io_wait.sh@35 -- # sync 00:14:42.270 02:16:41 -- nvmf/common.sh@542 -- # cat 00:14:42.270 02:16:41 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:42.270 02:16:41 -- nvmf/common.sh@520 -- # config=() 00:14:42.270 02:16:41 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:42.270 02:16:41 -- nvmf/common.sh@520 -- # local subsystem config 00:14:42.270 02:16:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:42.270 02:16:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:42.270 { 00:14:42.270 "params": { 00:14:42.270 "name": "Nvme$subsystem", 00:14:42.270 "trtype": "$TEST_TRANSPORT", 00:14:42.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:42.270 "adrfam": "ipv4", 00:14:42.270 "trsvcid": "$NVMF_PORT", 00:14:42.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:14:42.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:42.270 "hdgst": ${hdgst:-false}, 00:14:42.270 "ddgst": ${ddgst:-false} 00:14:42.270 }, 00:14:42.270 "method": "bdev_nvme_attach_controller" 00:14:42.270 } 00:14:42.270 EOF 00:14:42.270 )") 00:14:42.270 02:16:41 -- nvmf/common.sh@544 -- # jq . 00:14:42.270 02:16:41 -- nvmf/common.sh@542 -- # cat 00:14:42.270 02:16:41 -- nvmf/common.sh@545 -- # IFS=, 00:14:42.270 02:16:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:42.270 "params": { 00:14:42.270 "name": "Nvme1", 00:14:42.270 "trtype": "tcp", 00:14:42.270 "traddr": "10.0.0.2", 00:14:42.270 "adrfam": "ipv4", 00:14:42.270 "trsvcid": "4420", 00:14:42.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.270 "hdgst": false, 00:14:42.270 "ddgst": false 00:14:42.270 }, 00:14:42.270 "method": "bdev_nvme_attach_controller" 00:14:42.270 }' 00:14:42.270 02:16:41 -- nvmf/common.sh@544 -- # jq . 00:14:42.270 02:16:41 -- nvmf/common.sh@545 -- # IFS=, 00:14:42.270 02:16:41 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:42.270 02:16:41 -- nvmf/common.sh@520 -- # config=() 00:14:42.270 02:16:41 -- nvmf/common.sh@520 -- # local subsystem config 00:14:42.270 02:16:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:42.270 02:16:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:42.270 { 00:14:42.270 "params": { 00:14:42.270 "name": "Nvme$subsystem", 00:14:42.270 "trtype": "$TEST_TRANSPORT", 00:14:42.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:42.270 "adrfam": "ipv4", 00:14:42.270 "trsvcid": "$NVMF_PORT", 00:14:42.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:42.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:42.270 "hdgst": ${hdgst:-false}, 00:14:42.270 "ddgst": ${ddgst:-false} 00:14:42.270 }, 00:14:42.270 "method": "bdev_nvme_attach_controller" 00:14:42.270 } 00:14:42.270 EOF 00:14:42.270 )") 00:14:42.270 02:16:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:42.270 "params": { 00:14:42.270 "name": "Nvme1", 00:14:42.270 "trtype": "tcp", 00:14:42.270 "traddr": "10.0.0.2", 00:14:42.270 "adrfam": "ipv4", 00:14:42.270 "trsvcid": "4420", 00:14:42.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.270 "hdgst": false, 00:14:42.270 "ddgst": false 00:14:42.270 }, 00:14:42.270 "method": "bdev_nvme_attach_controller" 00:14:42.270 }' 00:14:42.270 02:16:41 -- nvmf/common.sh@544 -- # jq . 00:14:42.270 02:16:41 -- nvmf/common.sh@542 -- # cat 00:14:42.270 02:16:41 -- nvmf/common.sh@545 -- # IFS=, 00:14:42.270 02:16:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:42.270 "params": { 00:14:42.270 "name": "Nvme1", 00:14:42.270 "trtype": "tcp", 00:14:42.270 "traddr": "10.0.0.2", 00:14:42.270 "adrfam": "ipv4", 00:14:42.270 "trsvcid": "4420", 00:14:42.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.270 "hdgst": false, 00:14:42.270 "ddgst": false 00:14:42.270 }, 00:14:42.270 "method": "bdev_nvme_attach_controller" 00:14:42.270 }' 00:14:42.270 02:16:41 -- nvmf/common.sh@544 -- # jq . 
00:14:42.270 02:16:41 -- nvmf/common.sh@545 -- # IFS=, 00:14:42.270 02:16:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:42.270 "params": { 00:14:42.270 "name": "Nvme1", 00:14:42.270 "trtype": "tcp", 00:14:42.270 "traddr": "10.0.0.2", 00:14:42.270 "adrfam": "ipv4", 00:14:42.270 "trsvcid": "4420", 00:14:42.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.270 "hdgst": false, 00:14:42.270 "ddgst": false 00:14:42.270 }, 00:14:42.270 "method": "bdev_nvme_attach_controller" 00:14:42.270 }' 00:14:42.270 [2024-07-15 02:16:41.779921] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:42.270 [2024-07-15 02:16:41.780017] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:42.270 02:16:41 -- target/bdev_io_wait.sh@37 -- # wait 84120 00:14:42.270 [2024-07-15 02:16:41.806580] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:42.270 [2024-07-15 02:16:41.806700] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:42.270 [2024-07-15 02:16:41.814060] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:42.270 [2024-07-15 02:16:41.814138] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:42.528 [2024-07-15 02:16:41.825887] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:42.528 [2024-07-15 02:16:41.825977] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:42.528 [2024-07-15 02:16:41.982057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.528 [2024-07-15 02:16:42.060212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:42.528 [2024-07-15 02:16:42.065867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.787 [2024-07-15 02:16:42.144015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:42.787 [2024-07-15 02:16:42.144674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.787 [2024-07-15 02:16:42.234231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.787 [2024-07-15 02:16:42.238631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:42.787 Running I/O for 1 seconds... 00:14:42.787 Running I/O for 1 seconds... 00:14:42.787 [2024-07-15 02:16:42.308109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:43.045 Running I/O for 1 seconds... 00:14:43.045 Running I/O for 1 seconds... 
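
Each bdevperf above reads its bdev configuration from /dev/fd/63, a process substitution of gen_nvmf_target_json. As the trace shows, the helper expands one heredoc fragment per subsystem, joins the fragments with IFS=',' and normalizes the result with jq. A simplified sketch of that pattern (defaults mirror the resolved config printed above; the real helper wraps the fragments in a full bdev-subsystem config document, which is omitted here):

# Sketch of the gen_nvmf_target_json pattern traced above.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
config=()
for subsystem in "${@:-1}"; do                 # default: single subsystem "1"
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .            # valid JSON as-is for the single default fragment
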
00:14:43.979
00:14:43.979 Latency(us)
00:14:43.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:43.979 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:14:43.979 Nvme1n1 : 1.00 197209.88 770.35 0.00 0.00 646.49 271.83 1027.72
00:14:43.979 ===================================================================================================================
00:14:43.979 Total : 197209.88 770.35 0.00 0.00 646.49 271.83 1027.72
00:14:43.979
00:14:43.979 Latency(us)
00:14:43.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:43.979 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:14:43.979 Nvme1n1 : 1.02 5820.54 22.74 0.00 0.00 21680.41 9949.56 37176.79
00:14:43.979 ===================================================================================================================
00:14:43.979 Total : 5820.54 22.74 0.00 0.00 21680.41 9949.56 37176.79
00:14:43.979
00:14:43.979 Latency(us)
00:14:43.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:43.979 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:14:43.979 Nvme1n1 : 1.01 5676.72 22.17 0.00 0.00 22463.77 6940.86 43134.60
00:14:43.979 ===================================================================================================================
00:14:43.979 Total : 5676.72 22.17 0.00 0.00 22463.77 6940.86 43134.60
00:14:43.979
00:14:43.979 Latency(us)
00:14:43.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:43.979 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:14:43.979 Nvme1n1 : 1.01 8215.86 32.09 0.00 0.00 15518.91 7000.44 28240.06
00:14:43.979 ===================================================================================================================
00:14:43.979 Total : 8215.86 32.09 0.00 0.00 15518.91 7000.44 28240.06
00:14:44.236 02:16:43 -- target/bdev_io_wait.sh@38 -- # wait 84122
00:14:44.236 02:16:43 -- target/bdev_io_wait.sh@39 -- # wait 84124
00:14:44.236 02:16:43 -- target/bdev_io_wait.sh@40 -- # wait 84127
00:14:44.494 02:16:43 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:44.494 02:16:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:44.494 02:16:43 -- common/autotest_common.sh@10 -- # set +x
00:14:44.494 02:16:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:44.494 02:16:43 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:14:44.494 02:16:43 -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:14:44.494 02:16:43 -- nvmf/common.sh@476 -- # nvmfcleanup
00:14:44.494 02:16:43 -- nvmf/common.sh@116 -- # sync
00:14:44.494 02:16:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:44.494 02:16:43 -- nvmf/common.sh@119 -- # set +e
00:14:44.494 02:16:43 -- nvmf/common.sh@120 -- # for i in {1..20}
00:14:44.494 02:16:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:44.494 rmmod nvme_tcp
00:14:44.494 rmmod nvme_fabrics
00:14:44.494 rmmod nvme_keyring
00:14:44.494 02:16:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:44.494 02:16:43 -- nvmf/common.sh@123 -- # set -e
00:14:44.494 02:16:43 -- nvmf/common.sh@124 -- # return 0
00:14:44.494 02:16:43 -- nvmf/common.sh@477 -- # '[' -n 84067 ']'
00:14:44.494 02:16:43 -- nvmf/common.sh@478 -- # killprocess 84067
00:14:44.494 02:16:43 -- common/autotest_common.sh@926 -- # '[' -z 84067 ']'
00:14:44.494 02:16:43 -- common/autotest_common.sh@930 --
# kill -0 84067 00:14:44.494 02:16:43 -- common/autotest_common.sh@931 -- # uname 00:14:44.494 02:16:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:44.494 02:16:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84067 00:14:44.494 02:16:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:44.494 02:16:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:44.494 02:16:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84067' 00:14:44.494 killing process with pid 84067 00:14:44.494 02:16:43 -- common/autotest_common.sh@945 -- # kill 84067 00:14:44.494 02:16:43 -- common/autotest_common.sh@950 -- # wait 84067 00:14:44.751 02:16:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:44.751 02:16:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:44.751 02:16:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:44.751 02:16:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:44.751 02:16:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:44.751 02:16:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.751 02:16:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.751 02:16:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.751 02:16:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:44.751 00:14:44.751 real 0m4.125s 00:14:44.751 user 0m18.417s 00:14:44.751 sys 0m1.984s 00:14:44.751 02:16:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.751 02:16:44 -- common/autotest_common.sh@10 -- # set +x 00:14:44.751 ************************************ 00:14:44.751 END TEST nvmf_bdev_io_wait 00:14:44.751 ************************************ 00:14:44.751 02:16:44 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:44.751 02:16:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:44.751 02:16:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.751 02:16:44 -- common/autotest_common.sh@10 -- # set +x 00:14:44.751 ************************************ 00:14:44.751 START TEST nvmf_queue_depth 00:14:44.751 ************************************ 00:14:44.751 02:16:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:44.751 * Looking for test storage... 
00:14:44.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:44.751 02:16:44 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:44.751 02:16:44 -- nvmf/common.sh@7 -- # uname -s 00:14:44.751 02:16:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.751 02:16:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.751 02:16:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.751 02:16:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.752 02:16:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.752 02:16:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.752 02:16:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.752 02:16:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.752 02:16:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.752 02:16:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.752 02:16:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:14:44.752 02:16:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:14:44.752 02:16:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.752 02:16:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.752 02:16:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:44.752 02:16:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.010 02:16:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.010 02:16:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.010 02:16:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.010 02:16:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.010 02:16:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.010 02:16:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.010 02:16:44 -- 
paths/export.sh@5 -- # export PATH 00:14:45.010 02:16:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.010 02:16:44 -- nvmf/common.sh@46 -- # : 0 00:14:45.010 02:16:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:45.010 02:16:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:45.010 02:16:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:45.010 02:16:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.010 02:16:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.010 02:16:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:45.010 02:16:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:45.010 02:16:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:45.010 02:16:44 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:45.010 02:16:44 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:45.010 02:16:44 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:45.010 02:16:44 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:45.010 02:16:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:45.010 02:16:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.010 02:16:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:45.010 02:16:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:45.010 02:16:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:45.010 02:16:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.010 02:16:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.010 02:16:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.010 02:16:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:45.010 02:16:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:45.010 02:16:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:45.010 02:16:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:45.010 02:16:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:45.010 02:16:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:45.010 02:16:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.010 02:16:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.010 02:16:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:45.010 02:16:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:45.010 02:16:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.010 02:16:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.010 02:16:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.010 02:16:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.010 02:16:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.010 02:16:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.010 02:16:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.010 02:16:44 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.010 02:16:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:45.010 02:16:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:45.010 Cannot find device "nvmf_tgt_br" 00:14:45.010 02:16:44 -- nvmf/common.sh@154 -- # true 00:14:45.010 02:16:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.010 Cannot find device "nvmf_tgt_br2" 00:14:45.010 02:16:44 -- nvmf/common.sh@155 -- # true 00:14:45.010 02:16:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:45.010 02:16:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:45.010 Cannot find device "nvmf_tgt_br" 00:14:45.010 02:16:44 -- nvmf/common.sh@157 -- # true 00:14:45.010 02:16:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:45.010 Cannot find device "nvmf_tgt_br2" 00:14:45.010 02:16:44 -- nvmf/common.sh@158 -- # true 00:14:45.010 02:16:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:45.010 02:16:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:45.010 02:16:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.011 02:16:44 -- nvmf/common.sh@161 -- # true 00:14:45.011 02:16:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.011 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.011 02:16:44 -- nvmf/common.sh@162 -- # true 00:14:45.011 02:16:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.011 02:16:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.011 02:16:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.011 02:16:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.011 02:16:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.011 02:16:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.011 02:16:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.011 02:16:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:45.011 02:16:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:45.011 02:16:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:45.011 02:16:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:45.011 02:16:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:45.011 02:16:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:45.011 02:16:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.011 02:16:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.011 02:16:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.011 02:16:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:45.269 02:16:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:45.269 02:16:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.269 02:16:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.269 02:16:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.269 
02:16:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:14:45.269 02:16:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:14:45.269 02:16:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:14:45.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:45.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms
00:14:45.269
00:14:45.269 --- 10.0.0.2 ping statistics ---
00:14:45.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:45.269 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:14:45.269 02:16:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:14:45.269 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:14:45.269 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms
00:14:45.269
00:14:45.269 --- 10.0.0.3 ping statistics ---
00:14:45.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:45.269 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:14:45.269 02:16:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:14:45.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:45.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms
00:14:45.269
00:14:45.269 --- 10.0.0.1 ping statistics ---
00:14:45.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:45.269 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:14:45.269 02:16:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:45.269 02:16:44 -- nvmf/common.sh@421 -- # return 0
00:14:45.269 02:16:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:14:45.269 02:16:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:45.269 02:16:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:14:45.269 02:16:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:14:45.269 02:16:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:45.269 02:16:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:14:45.269 02:16:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:14:45.269 02:16:44 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:14:45.269 02:16:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:14:45.269 02:16:44 -- common/autotest_common.sh@712 -- # xtrace_disable
00:14:45.269 02:16:44 -- common/autotest_common.sh@10 -- # set +x
00:14:45.269 02:16:44 -- nvmf/common.sh@469 -- # nvmfpid=84354
00:14:45.269 02:16:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:45.269 02:16:44 -- nvmf/common.sh@470 -- # waitforlisten 84354
00:14:45.269 02:16:44 -- common/autotest_common.sh@819 -- # '[' -z 84354 ']'
00:14:45.269 02:16:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:45.269 02:16:44 -- common/autotest_common.sh@824 -- # local max_retries=100
00:14:45.269 02:16:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:45.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
02:16:44 -- common/autotest_common.sh@828 -- # xtrace_disable
00:14:45.269 02:16:44 -- common/autotest_common.sh@10 -- # set +x
00:14:45.269 [2024-07-15 02:16:44.712881] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:14:45.269 [2024-07-15 02:16:44.712988] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.527 [2024-07-15 02:16:44.849787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.527 [2024-07-15 02:16:44.920593] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:45.527 [2024-07-15 02:16:44.920775] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.527 [2024-07-15 02:16:44.920789] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.527 [2024-07-15 02:16:44.920797] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.527 [2024-07-15 02:16:44.920821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.460 02:16:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:46.460 02:16:45 -- common/autotest_common.sh@852 -- # return 0 00:14:46.460 02:16:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:46.460 02:16:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:46.460 02:16:45 -- common/autotest_common.sh@10 -- # set +x 00:14:46.460 02:16:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.460 02:16:45 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.460 02:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.460 02:16:45 -- common/autotest_common.sh@10 -- # set +x 00:14:46.460 [2024-07-15 02:16:45.769161] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.460 02:16:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.460 02:16:45 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:46.460 02:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.460 02:16:45 -- common/autotest_common.sh@10 -- # set +x 00:14:46.460 Malloc0 00:14:46.460 02:16:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.460 02:16:45 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:46.460 02:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.460 02:16:45 -- common/autotest_common.sh@10 -- # set +x 00:14:46.460 02:16:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.460 02:16:45 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:46.460 02:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.460 02:16:45 -- common/autotest_common.sh@10 -- # set +x 00:14:46.460 02:16:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.460 02:16:45 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.461 02:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.461 02:16:45 -- common/autotest_common.sh@10 -- # set +x 00:14:46.461 [2024-07-15 02:16:45.830774] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.461 02:16:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.461 02:16:45 -- target/queue_depth.sh@30 -- # bdevperf_pid=84405 00:14:46.461 02:16:45 
-- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:46.461 02:16:45 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:46.461 02:16:45 -- target/queue_depth.sh@33 -- # waitforlisten 84405 /var/tmp/bdevperf.sock 00:14:46.461 02:16:45 -- common/autotest_common.sh@819 -- # '[' -z 84405 ']' 00:14:46.461 02:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.461 02:16:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:46.461 02:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.461 02:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:46.461 02:16:45 -- common/autotest_common.sh@10 -- # set +x 00:14:46.461 [2024-07-15 02:16:45.893017] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:46.461 [2024-07-15 02:16:45.893813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84405 ] 00:14:46.719 [2024-07-15 02:16:46.041120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.719 [2024-07-15 02:16:46.137792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.285 02:16:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:47.285 02:16:46 -- common/autotest_common.sh@852 -- # return 0 00:14:47.285 02:16:46 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:47.285 02:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.285 02:16:46 -- common/autotest_common.sh@10 -- # set +x 00:14:47.543 NVMe0n1 00:14:47.543 02:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.543 02:16:46 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:47.543 Running I/O for 10 seconds... 
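
Where bdev_io_wait drove four one-shot bdevperf runs, the queue_depth trace above starts a single bdevperf in wait mode (-z) on its own RPC socket, attaches the remote namespace over that socket, and then triggers the preconfigured workload. A condensed replay of the three steps (rpc_cmd is assumed to resolve to scripts/rpc.py here; paths as logged):

# Condensed from the queue_depth trace above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &   # idles until told to run
# Attach the target's namespace as bdev NVMe0n1 over the bdevperf RPC socket:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Kick off the 10-second verify run at queue depth 1024:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests
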
00:14:57.542
00:14:57.542 Latency(us)
00:14:57.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:57.542 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:14:57.542 Verification LBA range: start 0x0 length 0x4000
00:14:57.542 NVMe0n1 : 10.07 15675.21 61.23 0.00 0.00 65066.44 16205.27 58624.93
00:14:57.542 ===================================================================================================================
00:14:57.542 Total : 15675.21 61.23 0.00 0.00 65066.44 16205.27 58624.93
00:14:57.542 0
00:14:57.801 02:16:57 -- target/queue_depth.sh@39 -- # killprocess 84405
00:14:57.801 02:16:57 -- common/autotest_common.sh@926 -- # '[' -z 84405 ']'
00:14:57.801 02:16:57 -- common/autotest_common.sh@930 -- # kill -0 84405
00:14:57.801 02:16:57 -- common/autotest_common.sh@931 -- # uname
00:14:57.801 02:16:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:14:57.801 02:16:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84405
00:14:57.801 02:16:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:14:57.801 02:16:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
killing process with pid 84405
02:16:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84405'
Received shutdown signal, test time was about 10.000000 seconds
00:14:57.801
00:14:57.801 Latency(us)
00:14:57.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:57.801 ===================================================================================================================
00:14:57.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:57.801 02:16:57 -- common/autotest_common.sh@945 -- # kill 84405
00:14:57.801 02:16:57 -- common/autotest_common.sh@950 -- # wait 84405
00:14:57.801 02:16:57 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:57.801 02:16:57 -- target/queue_depth.sh@43 -- # nvmftestfini
00:14:57.801 02:16:57 -- nvmf/common.sh@476 -- # nvmfcleanup
00:14:57.801 02:16:57 -- nvmf/common.sh@116 -- # sync
00:14:58.060 02:16:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:14:58.060 02:16:57 -- nvmf/common.sh@119 -- # set +e
00:14:58.060 02:16:57 -- nvmf/common.sh@120 -- # for i in {1..20}
00:14:58.060 02:16:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:14:58.060 rmmod nvme_tcp
00:14:58.060 rmmod nvme_fabrics
00:14:58.060 rmmod nvme_keyring
00:14:58.061 02:16:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:14:58.061 02:16:57 -- nvmf/common.sh@123 -- # set -e
00:14:58.061 02:16:57 -- nvmf/common.sh@124 -- # return 0
00:14:58.061 02:16:57 -- nvmf/common.sh@477 -- # '[' -n 84354 ']'
00:14:58.061 02:16:57 -- nvmf/common.sh@478 -- # killprocess 84354
00:14:58.061 02:16:57 -- common/autotest_common.sh@926 -- # '[' -z 84354 ']'
00:14:58.061 02:16:57 -- common/autotest_common.sh@930 -- # kill -0 84354
00:14:58.061 02:16:57 -- common/autotest_common.sh@931 -- # uname
00:14:58.061 02:16:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:14:58.061 02:16:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84354
00:14:58.061 02:16:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:14:58.061 killing process with pid 84354
02:16:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:14:58.061 02:16:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84354'
00:14:58.061 02:16:57 --
common/autotest_common.sh@945 -- # kill 84354 00:14:58.061 02:16:57 -- common/autotest_common.sh@950 -- # wait 84354 00:14:58.319 02:16:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:58.319 02:16:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:58.319 02:16:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:58.319 02:16:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.319 02:16:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:58.319 02:16:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.319 02:16:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.319 02:16:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.319 02:16:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:58.319 ************************************ 00:14:58.319 END TEST nvmf_queue_depth 00:14:58.319 ************************************ 00:14:58.319 00:14:58.319 real 0m13.648s 00:14:58.319 user 0m22.557s 00:14:58.319 sys 0m2.631s 00:14:58.319 02:16:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.319 02:16:57 -- common/autotest_common.sh@10 -- # set +x 00:14:58.578 02:16:57 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:58.578 02:16:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:58.578 02:16:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:58.578 02:16:57 -- common/autotest_common.sh@10 -- # set +x 00:14:58.578 ************************************ 00:14:58.578 START TEST nvmf_multipath 00:14:58.578 ************************************ 00:14:58.578 02:16:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:58.578 * Looking for test storage... 
00:14:58.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:58.578 02:16:57 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:58.578 02:16:57 -- nvmf/common.sh@7 -- # uname -s 00:14:58.578 02:16:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.578 02:16:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.578 02:16:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.579 02:16:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.579 02:16:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.579 02:16:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.579 02:16:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.579 02:16:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.579 02:16:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.579 02:16:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.579 02:16:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:14:58.579 02:16:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:14:58.579 02:16:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.579 02:16:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.579 02:16:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:58.579 02:16:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:58.579 02:16:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.579 02:16:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.579 02:16:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.579 02:16:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.579 02:16:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.579 02:16:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.579 02:16:58 -- 
paths/export.sh@5 -- # export PATH 00:14:58.579 02:16:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.579 02:16:58 -- nvmf/common.sh@46 -- # : 0 00:14:58.579 02:16:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:58.579 02:16:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:58.579 02:16:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:58.579 02:16:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.579 02:16:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.579 02:16:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:58.579 02:16:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:58.579 02:16:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:58.579 02:16:58 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:58.579 02:16:58 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:58.579 02:16:58 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:58.579 02:16:58 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:58.579 02:16:58 -- target/multipath.sh@43 -- # nvmftestinit 00:14:58.579 02:16:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:58.579 02:16:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.579 02:16:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:58.579 02:16:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:58.579 02:16:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:58.579 02:16:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.579 02:16:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.579 02:16:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.579 02:16:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:58.579 02:16:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:58.579 02:16:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:58.579 02:16:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:58.579 02:16:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:58.579 02:16:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:58.579 02:16:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.579 02:16:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.579 02:16:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:58.579 02:16:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:58.579 02:16:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:58.579 02:16:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:58.579 02:16:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:58.579 02:16:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.579 02:16:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:58.579 02:16:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:58.579 02:16:58 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:58.579 02:16:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:58.579 02:16:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:58.579 02:16:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:58.579 Cannot find device "nvmf_tgt_br" 00:14:58.579 02:16:58 -- nvmf/common.sh@154 -- # true 00:14:58.579 02:16:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:58.579 Cannot find device "nvmf_tgt_br2" 00:14:58.579 02:16:58 -- nvmf/common.sh@155 -- # true 00:14:58.579 02:16:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:58.579 02:16:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:58.579 Cannot find device "nvmf_tgt_br" 00:14:58.579 02:16:58 -- nvmf/common.sh@157 -- # true 00:14:58.579 02:16:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:58.579 Cannot find device "nvmf_tgt_br2" 00:14:58.579 02:16:58 -- nvmf/common.sh@158 -- # true 00:14:58.579 02:16:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:58.579 02:16:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:58.838 02:16:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:58.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.838 02:16:58 -- nvmf/common.sh@161 -- # true 00:14:58.838 02:16:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:58.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:58.838 02:16:58 -- nvmf/common.sh@162 -- # true 00:14:58.838 02:16:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:58.838 02:16:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:58.838 02:16:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:58.838 02:16:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:58.838 02:16:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:58.838 02:16:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:58.838 02:16:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:58.838 02:16:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:58.838 02:16:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:58.838 02:16:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:58.838 02:16:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:58.838 02:16:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:58.838 02:16:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:58.838 02:16:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:58.838 02:16:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:58.838 02:16:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:58.838 02:16:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:58.838 02:16:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:58.838 02:16:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:58.838 02:16:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:58.838 02:16:58 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:58.838 02:16:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:58.838 02:16:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:58.838 02:16:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:58.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:58.838 00:14:58.838 --- 10.0.0.2 ping statistics --- 00:14:58.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.838 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:58.838 02:16:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:58.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:58.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:14:58.838 00:14:58.838 --- 10.0.0.3 ping statistics --- 00:14:58.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.838 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:58.838 02:16:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:58.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:58.838 00:14:58.838 --- 10.0.0.1 ping statistics --- 00:14:58.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.838 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:58.838 02:16:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.838 02:16:58 -- nvmf/common.sh@421 -- # return 0 00:14:58.838 02:16:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:58.838 02:16:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.838 02:16:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:58.838 02:16:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:58.838 02:16:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.838 02:16:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:58.838 02:16:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:59.097 02:16:58 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:59.097 02:16:58 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:59.097 02:16:58 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:59.097 02:16:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:59.097 02:16:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:59.097 02:16:58 -- common/autotest_common.sh@10 -- # set +x 00:14:59.097 02:16:58 -- nvmf/common.sh@469 -- # nvmfpid=84737 00:14:59.097 02:16:58 -- nvmf/common.sh@470 -- # waitforlisten 84737 00:14:59.097 02:16:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.097 02:16:58 -- common/autotest_common.sh@819 -- # '[' -z 84737 ']' 00:14:59.097 02:16:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.097 02:16:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:59.097 02:16:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
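Note: the veth topology that nvmf_veth_init assembles above can be reproduced outside the harness with roughly the following commands. This is a condensed sketch of the steps visible in the trace (interface names, addresses, and the port-4420 firewall rule are exactly those the test uses); the second target path (nvmf_tgt_if2 / 10.0.0.3) follows the same pattern and is omitted here. The "Cannot find device" / "Cannot open network namespace" messages earlier in the trace are the harmless teardown of a topology that does not exist yet, not failures.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # host-side initiator reaching the target namespace over the bridge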
00:14:59.097 02:16:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:59.097 02:16:58 -- common/autotest_common.sh@10 -- # set +x 00:14:59.097 [2024-07-15 02:16:58.457473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:59.097 [2024-07-15 02:16:58.457585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.097 [2024-07-15 02:16:58.599888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.356 [2024-07-15 02:16:58.683171] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:59.356 [2024-07-15 02:16:58.683326] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.356 [2024-07-15 02:16:58.683347] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.356 [2024-07-15 02:16:58.683360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.356 [2024-07-15 02:16:58.683727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.356 [2024-07-15 02:16:58.683814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.356 [2024-07-15 02:16:58.683958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.356 [2024-07-15 02:16:58.684428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.294 02:16:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:00.294 02:16:59 -- common/autotest_common.sh@852 -- # return 0 00:15:00.294 02:16:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:00.294 02:16:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:00.294 02:16:59 -- common/autotest_common.sh@10 -- # set +x 00:15:00.294 02:16:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.294 02:16:59 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:00.294 [2024-07-15 02:16:59.758427] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.294 02:16:59 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:00.560 Malloc0 00:15:00.560 02:17:00 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:00.818 02:17:00 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:01.076 02:17:00 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.335 [2024-07-15 02:17:00.826299] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.335 02:17:00 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:01.594 [2024-07-15 02:17:01.042624] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:01.594 02:17:01 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:01.852 02:17:01 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:02.111 02:17:01 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:02.111 02:17:01 -- common/autotest_common.sh@1177 -- # local i=0 00:15:02.111 02:17:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.111 02:17:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:02.111 02:17:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:04.012 02:17:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:04.012 02:17:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:04.012 02:17:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.012 02:17:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:04.012 02:17:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.012 02:17:03 -- common/autotest_common.sh@1187 -- # return 0 00:15:04.012 02:17:03 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:04.012 02:17:03 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:04.012 02:17:03 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:04.012 02:17:03 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:04.012 02:17:03 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:04.012 02:17:03 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:04.012 02:17:03 -- target/multipath.sh@38 -- # return 0 00:15:04.012 02:17:03 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:04.012 02:17:03 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:04.012 02:17:03 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:04.012 02:17:03 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:04.012 02:17:03 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:04.012 02:17:03 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:04.012 02:17:03 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:04.012 02:17:03 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:04.012 02:17:03 -- target/multipath.sh@22 -- # local timeout=20 00:15:04.012 02:17:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:04.012 02:17:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:04.012 02:17:03 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:04.012 02:17:03 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:04.012 02:17:03 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:04.012 02:17:03 -- target/multipath.sh@22 -- # local timeout=20 00:15:04.012 02:17:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:04.012 02:17:03 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:04.012 02:17:03 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:04.012 02:17:03 -- target/multipath.sh@85 -- # echo numa 00:15:04.012 02:17:03 -- target/multipath.sh@88 -- # fio_pid=84880 00:15:04.012 02:17:03 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:04.012 02:17:03 -- target/multipath.sh@90 -- # sleep 1 00:15:04.012 [global] 00:15:04.012 thread=1 00:15:04.012 invalidate=1 00:15:04.012 rw=randrw 00:15:04.012 time_based=1 00:15:04.012 runtime=6 00:15:04.012 ioengine=libaio 00:15:04.012 direct=1 00:15:04.012 bs=4096 00:15:04.012 iodepth=128 00:15:04.012 norandommap=0 00:15:04.012 numjobs=1 00:15:04.012 00:15:04.012 verify_dump=1 00:15:04.012 verify_backlog=512 00:15:04.012 verify_state_save=0 00:15:04.012 do_verify=1 00:15:04.012 verify=crc32c-intel 00:15:04.012 [job0] 00:15:04.012 filename=/dev/nvme0n1 00:15:04.012 Could not set queue depth (nvme0n1) 00:15:04.270 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:04.270 fio-3.35 00:15:04.270 Starting 1 thread 00:15:05.204 02:17:04 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:05.462 02:17:04 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:05.720 02:17:05 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:05.720 02:17:05 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:05.720 02:17:05 -- target/multipath.sh@22 -- # local timeout=20 00:15:05.720 02:17:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:05.720 02:17:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:05.720 02:17:05 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:05.720 02:17:05 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:05.720 02:17:05 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:05.720 02:17:05 -- target/multipath.sh@22 -- # local timeout=20 00:15:05.720 02:17:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:05.720 02:17:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:05.720 02:17:05 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:05.720 02:17:05 -- target/multipath.sh@25 -- # sleep 1s 00:15:06.654 02:17:06 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:06.654 02:17:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:06.654 02:17:06 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:06.654 02:17:06 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:06.913 02:17:06 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:07.172 02:17:06 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:07.172 02:17:06 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:07.172 02:17:06 -- target/multipath.sh@22 -- # local timeout=20 00:15:07.172 02:17:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:07.172 02:17:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:07.172 02:17:06 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:07.172 02:17:06 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:07.172 02:17:06 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:07.172 02:17:06 -- target/multipath.sh@22 -- # local timeout=20 00:15:07.172 02:17:06 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:07.172 02:17:06 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:07.172 02:17:06 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:07.172 02:17:06 -- target/multipath.sh@25 -- # sleep 1s 00:15:08.547 02:17:07 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:08.548 02:17:07 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:08.548 02:17:07 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:08.548 02:17:07 -- target/multipath.sh@104 -- # wait 84880 00:15:10.526 00:15:10.526 job0: (groupid=0, jobs=1): err= 0: pid=84901: Mon Jul 15 02:17:09 2024 00:15:10.526 read: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(261MiB/6005msec) 00:15:10.526 slat (usec): min=3, max=5358, avg=50.52, stdev=228.09 00:15:10.526 clat (usec): min=1974, max=14545, avg=7773.09, stdev=1347.50 00:15:10.526 lat (usec): min=1988, max=14653, avg=7823.62, stdev=1356.22 00:15:10.526 clat percentiles (usec): 00:15:10.526 | 1.00th=[ 4555], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 6718], 00:15:10.526 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 7963], 00:15:10.526 | 70.00th=[ 8291], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[10159], 00:15:10.526 | 99.00th=[11731], 99.50th=[12387], 99.90th=[13435], 99.95th=[14091], 00:15:10.526 | 99.99th=[14353] 00:15:10.526 bw ( KiB/s): min=12144, max=30360, per=52.87%, avg=23552.73, stdev=5246.18, samples=11 00:15:10.526 iops : min= 3036, max= 7590, avg=5888.18, stdev=1311.54, samples=11 00:15:10.526 write: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(141MiB/5461msec); 0 zone resets 00:15:10.526 slat (usec): min=4, max=2584, avg=63.34, stdev=161.85 00:15:10.526 clat (usec): min=953, max=14341, avg=6715.33, stdev=1110.47 00:15:10.526 lat (usec): min=1014, max=14365, avg=6778.67, stdev=1114.17 00:15:10.526 clat percentiles (usec): 00:15:10.526 | 1.00th=[ 3654], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 5997], 00:15:10.526 | 30.00th=[ 6259], 40.00th=[ 6521], 50.00th=[ 6718], 60.00th=[ 6915], 00:15:10.526 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7963], 95.00th=[ 8455], 00:15:10.526 | 99.00th=[10028], 99.50th=[10814], 99.90th=[12780], 99.95th=[13042], 00:15:10.526 | 99.99th=[14222] 00:15:10.526 bw ( KiB/s): min=12536, max=29584, per=89.32%, avg=23567.27, stdev=4931.97, samples=11 00:15:10.526 iops : min= 3134, max= 7396, avg=5891.82, stdev=1232.99, samples=11 00:15:10.526 lat (usec) : 1000=0.01% 00:15:10.526 lat (msec) : 2=0.01%, 4=0.91%, 10=95.09%, 20=3.99% 00:15:10.526 cpu : usr=5.40%, sys=22.86%, ctx=6195, majf=0, minf=108 00:15:10.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:10.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:10.526 issued rwts: total=66873,36024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:10.526 00:15:10.526 Run status group 0 (all jobs): 00:15:10.526 READ: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=261MiB (274MB), run=6005-6005msec 00:15:10.526 WRITE: bw=25.8MiB/s (27.0MB/s), 25.8MiB/s-25.8MiB/s (27.0MB/s-27.0MB/s), io=141MiB (148MB), run=5461-5461msec 00:15:10.526 00:15:10.526 Disk stats (read/write): 00:15:10.526 nvme0n1: ios=66073/35102, merge=0/0, ticks=481520/220690, in_queue=702210, util=98.70% 00:15:10.526 02:17:09 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:10.784 02:17:10 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:11.042 02:17:10 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:15:11.042 02:17:10 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:11.042 02:17:10 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.042 02:17:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:11.042 02:17:10 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:11.042 02:17:10 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:11.042 02:17:10 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:11.042 02:17:10 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:11.042 02:17:10 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.042 02:17:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:11.042 02:17:10 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:11.042 02:17:10 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:11.042 02:17:10 -- target/multipath.sh@25 -- # sleep 1s 00:15:11.976 02:17:11 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:11.976 02:17:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:11.976 02:17:11 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:11.976 02:17:11 -- target/multipath.sh@113 -- # echo round-robin 00:15:11.976 02:17:11 -- target/multipath.sh@116 -- # fio_pid=85029 00:15:11.976 02:17:11 -- target/multipath.sh@118 -- # sleep 1 00:15:11.976 02:17:11 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:11.976 [global] 00:15:11.976 thread=1 00:15:11.976 invalidate=1 00:15:11.976 rw=randrw 00:15:11.976 time_based=1 00:15:11.976 runtime=6 00:15:11.976 ioengine=libaio 00:15:11.976 direct=1 00:15:11.976 bs=4096 00:15:11.976 iodepth=128 00:15:11.976 norandommap=0 00:15:11.976 numjobs=1 00:15:11.976 00:15:11.976 verify_dump=1 00:15:11.976 verify_backlog=512 00:15:11.976 verify_state_save=0 00:15:11.976 do_verify=1 00:15:11.976 verify=crc32c-intel 00:15:11.976 [job0] 00:15:11.976 filename=/dev/nvme0n1 00:15:11.976 Could not set queue depth (nvme0n1) 00:15:12.234 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:12.234 fio-3.35 00:15:12.234 Starting 1 thread 00:15:13.169 02:17:12 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:13.169 02:17:12 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:13.735 02:17:12 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:13.735 02:17:12 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:13.735 02:17:12 -- target/multipath.sh@22 -- # local timeout=20 00:15:13.735 02:17:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:13.735 02:17:12 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:13.735 02:17:12 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:13.735 02:17:12 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:13.735 02:17:12 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:13.735 02:17:12 -- target/multipath.sh@22 -- # local timeout=20 00:15:13.735 02:17:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:13.735 02:17:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:13.735 02:17:12 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:13.735 02:17:12 -- target/multipath.sh@25 -- # sleep 1s 00:15:14.668 02:17:13 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:14.668 02:17:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:14.668 02:17:13 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:14.668 02:17:13 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:14.926 02:17:14 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:14.926 02:17:14 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:14.926 02:17:14 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:14.927 02:17:14 -- target/multipath.sh@22 -- # local timeout=20 00:15:14.927 02:17:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:14.927 02:17:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:14.927 02:17:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:14.927 02:17:14 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:14.927 02:17:14 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:14.927 02:17:14 -- target/multipath.sh@22 -- # local timeout=20 00:15:14.927 02:17:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:14.927 02:17:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:14.927 02:17:14 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:14.927 02:17:14 -- target/multipath.sh@25 -- # sleep 1s 00:15:16.302 02:17:15 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:16.302 02:17:15 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:16.302 02:17:15 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:16.302 02:17:15 -- target/multipath.sh@132 -- # wait 85029 00:15:18.204 00:15:18.204 job0: (groupid=0, jobs=1): err= 0: pid=85056: Mon Jul 15 02:17:17 2024 00:15:18.204 read: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(261MiB/6007msec) 00:15:18.204 slat (usec): min=6, max=6212, avg=45.59, stdev=219.13 00:15:18.205 clat (usec): min=256, max=20490, avg=7975.62, stdev=1912.52 00:15:18.205 lat (usec): min=383, max=20502, avg=8021.21, stdev=1920.75 00:15:18.205 clat percentiles (usec): 00:15:18.205 | 1.00th=[ 2966], 5.00th=[ 4883], 10.00th=[ 6063], 20.00th=[ 6783], 00:15:18.205 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8225], 00:15:18.205 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[10159], 95.00th=[11469], 00:15:18.205 | 99.00th=[13698], 99.50th=[15008], 99.90th=[17171], 99.95th=[17695], 00:15:18.205 | 99.99th=[19530] 00:15:18.205 bw ( KiB/s): min= 8032, max=36208, per=51.92%, avg=23077.09, stdev=8175.39, samples=11 00:15:18.205 iops : min= 2008, max= 9052, avg=5769.27, stdev=2043.85, samples=11 00:15:18.205 write: IOPS=6618, BW=25.9MiB/s (27.1MB/s)(136MiB/5252msec); 0 zone resets 00:15:18.205 slat (usec): min=12, max=2393, avg=55.47, stdev=139.66 00:15:18.205 clat (usec): min=483, max=17578, avg=6622.62, stdev=1721.16 00:15:18.205 lat (usec): min=524, max=17607, avg=6678.09, stdev=1728.52 00:15:18.205 clat percentiles (usec): 00:15:18.205 | 1.00th=[ 2606], 5.00th=[ 3556], 10.00th=[ 4228], 20.00th=[ 5473], 00:15:18.205 | 30.00th=[ 6128], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6980], 00:15:18.205 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8455], 95.00th=[ 9634], 00:15:18.205 | 99.00th=[11207], 99.50th=[12387], 99.90th=[15401], 99.95th=[15795], 00:15:18.205 | 99.99th=[17171] 00:15:18.205 bw ( KiB/s): min= 8192, max=35616, per=87.42%, avg=23143.91, stdev=7887.89, samples=11 00:15:18.205 iops : min= 2048, max= 8904, avg=5785.91, stdev=1971.94, samples=11 00:15:18.205 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:15:18.205 lat (msec) : 2=0.30%, 4=4.14%, 10=86.91%, 20=8.60%, 50=0.01% 00:15:18.205 cpu : usr=6.23%, sys=23.18%, ctx=6215, majf=0, minf=121 00:15:18.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:18.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:18.205 issued rwts: total=66747,34759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:18.205 00:15:18.205 Run status group 0 (all jobs): 00:15:18.205 READ: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=261MiB (273MB), run=6007-6007msec 00:15:18.205 WRITE: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=136MiB (142MB), run=5252-5252msec 00:15:18.205 00:15:18.205 Disk stats (read/write): 00:15:18.205 nvme0n1: ios=65836/34165, merge=0/0, ticks=492124/211612, in_queue=703736, util=98.70% 00:15:18.205 02:17:17 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:18.462 02:17:17 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:18.462 02:17:17 -- common/autotest_common.sh@1198 -- # local i=0 00:15:18.462 02:17:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:18.462 02:17:17 
-- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.462 02:17:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.462 02:17:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:18.462 02:17:17 -- common/autotest_common.sh@1210 -- # return 0 00:15:18.462 02:17:17 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.721 02:17:18 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:18.721 02:17:18 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:18.721 02:17:18 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:18.721 02:17:18 -- target/multipath.sh@144 -- # nvmftestfini 00:15:18.721 02:17:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:18.721 02:17:18 -- nvmf/common.sh@116 -- # sync 00:15:18.721 02:17:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:18.721 02:17:18 -- nvmf/common.sh@119 -- # set +e 00:15:18.721 02:17:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:18.721 02:17:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:18.721 rmmod nvme_tcp 00:15:18.980 rmmod nvme_fabrics 00:15:18.980 rmmod nvme_keyring 00:15:18.980 02:17:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:18.980 02:17:18 -- nvmf/common.sh@123 -- # set -e 00:15:18.980 02:17:18 -- nvmf/common.sh@124 -- # return 0 00:15:18.980 02:17:18 -- nvmf/common.sh@477 -- # '[' -n 84737 ']' 00:15:18.980 02:17:18 -- nvmf/common.sh@478 -- # killprocess 84737 00:15:18.980 02:17:18 -- common/autotest_common.sh@926 -- # '[' -z 84737 ']' 00:15:18.980 02:17:18 -- common/autotest_common.sh@930 -- # kill -0 84737 00:15:18.980 02:17:18 -- common/autotest_common.sh@931 -- # uname 00:15:18.980 02:17:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:18.980 02:17:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84737 00:15:18.980 killing process with pid 84737 00:15:18.980 02:17:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:18.980 02:17:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:18.980 02:17:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84737' 00:15:18.980 02:17:18 -- common/autotest_common.sh@945 -- # kill 84737 00:15:18.980 02:17:18 -- common/autotest_common.sh@950 -- # wait 84737 00:15:19.238 02:17:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:19.238 02:17:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:19.238 02:17:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:19.238 02:17:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.238 02:17:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:19.238 02:17:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.238 02:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.238 02:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.238 02:17:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:19.238 00:15:19.238 real 0m20.726s 00:15:19.239 user 1m20.899s 00:15:19.239 sys 0m6.750s 00:15:19.239 02:17:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.239 02:17:18 -- common/autotest_common.sh@10 -- # set +x 00:15:19.239 ************************************ 00:15:19.239 END TEST nvmf_multipath 00:15:19.239 ************************************ 00:15:19.239 02:17:18 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:19.239 02:17:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:19.239 02:17:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:19.239 02:17:18 -- common/autotest_common.sh@10 -- # set +x 00:15:19.239 ************************************ 00:15:19.239 START TEST nvmf_zcopy 00:15:19.239 ************************************ 00:15:19.239 02:17:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:19.239 * Looking for test storage... 00:15:19.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:19.239 02:17:18 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:19.239 02:17:18 -- nvmf/common.sh@7 -- # uname -s 00:15:19.239 02:17:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.239 02:17:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.239 02:17:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.239 02:17:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.239 02:17:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.239 02:17:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.239 02:17:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.239 02:17:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.239 02:17:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.239 02:17:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.239 02:17:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:15:19.239 02:17:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:15:19.239 02:17:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.239 02:17:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.497 02:17:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.497 02:17:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.497 02:17:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.497 02:17:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.497 02:17:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.497 02:17:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.497 02:17:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.498 
02:17:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.498 02:17:18 -- paths/export.sh@5 -- # export PATH 00:15:19.498 02:17:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.498 02:17:18 -- nvmf/common.sh@46 -- # : 0 00:15:19.498 02:17:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:19.498 02:17:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:19.498 02:17:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:19.498 02:17:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.498 02:17:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.498 02:17:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:19.498 02:17:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:19.498 02:17:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:19.498 02:17:18 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:19.498 02:17:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:19.498 02:17:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.498 02:17:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:19.498 02:17:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:19.498 02:17:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:19.498 02:17:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.498 02:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.498 02:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.498 02:17:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:19.498 02:17:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:19.498 02:17:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:19.498 02:17:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:19.498 02:17:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:19.498 02:17:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:19.498 02:17:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.498 02:17:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.498 02:17:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:19.498 02:17:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:19.498 02:17:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:19.498 02:17:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:19.498 02:17:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:19.498 02:17:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.498 02:17:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:19.498 02:17:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:19.498 02:17:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:19.498 02:17:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:19.498 02:17:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:19.498 02:17:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:19.498 Cannot find device "nvmf_tgt_br" 00:15:19.498 02:17:18 -- nvmf/common.sh@154 -- # true 00:15:19.498 02:17:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.498 Cannot find device "nvmf_tgt_br2" 00:15:19.498 02:17:18 -- nvmf/common.sh@155 -- # true 00:15:19.498 02:17:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:19.498 02:17:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:19.498 Cannot find device "nvmf_tgt_br" 00:15:19.498 02:17:18 -- nvmf/common.sh@157 -- # true 00:15:19.498 02:17:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:19.498 Cannot find device "nvmf_tgt_br2" 00:15:19.498 02:17:18 -- nvmf/common.sh@158 -- # true 00:15:19.498 02:17:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:19.498 02:17:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:19.498 02:17:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.498 02:17:18 -- nvmf/common.sh@161 -- # true 00:15:19.498 02:17:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.498 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.498 02:17:18 -- nvmf/common.sh@162 -- # true 00:15:19.498 02:17:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:19.498 02:17:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:19.498 02:17:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:19.498 02:17:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:19.498 02:17:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:19.498 02:17:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:19.498 02:17:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:19.498 02:17:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:19.498 02:17:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:19.498 02:17:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:19.498 02:17:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:19.498 02:17:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:19.498 02:17:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:19.498 02:17:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:19.498 02:17:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:19.498 02:17:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:19.498 02:17:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:19.498 
02:17:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:19.757 02:17:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:19.757 02:17:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:19.757 02:17:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:19.757 02:17:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:19.757 02:17:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:19.757 02:17:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:19.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:19.757 00:15:19.757 --- 10.0.0.2 ping statistics --- 00:15:19.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.757 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:19.757 02:17:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:19.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:19.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:19.757 00:15:19.757 --- 10.0.0.3 ping statistics --- 00:15:19.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.757 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:19.757 02:17:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:19.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:15:19.757 00:15:19.757 --- 10.0.0.1 ping statistics --- 00:15:19.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.757 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:19.757 02:17:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.757 02:17:19 -- nvmf/common.sh@421 -- # return 0 00:15:19.757 02:17:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:19.757 02:17:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.757 02:17:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:19.757 02:17:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:19.757 02:17:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.757 02:17:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:19.757 02:17:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:19.757 02:17:19 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:19.757 02:17:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:19.757 02:17:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:19.757 02:17:19 -- common/autotest_common.sh@10 -- # set +x 00:15:19.757 02:17:19 -- nvmf/common.sh@469 -- # nvmfpid=85331 00:15:19.757 02:17:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:19.757 02:17:19 -- nvmf/common.sh@470 -- # waitforlisten 85331 00:15:19.757 02:17:19 -- common/autotest_common.sh@819 -- # '[' -z 85331 ']' 00:15:19.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.757 02:17:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.757 02:17:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:19.757 02:17:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
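A note on the nvmf_tgt invocation just above: -m takes a hexadecimal core mask, so the earlier multipath target (-m 0xF) ran four reactors on cores 0-3, while this zcopy target (-m 0x2) pins a single reactor to core 1, matching the "Reactor started on core 1" notice further down. Stripped of the harness plumbing, the launch amounts to the following (paths as used by this job; the trailing comment describes what waitforlisten does rather than a literal command):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten then polls until the app answers on /var/tmp/spdk.sock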
00:15:19.757 02:17:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:19.757 02:17:19 -- common/autotest_common.sh@10 -- # set +x 00:15:19.757 [2024-07-15 02:17:19.200753] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:19.757 [2024-07-15 02:17:19.201063] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.016 [2024-07-15 02:17:19.339107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.016 [2024-07-15 02:17:19.438876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:20.016 [2024-07-15 02:17:19.439399] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.016 [2024-07-15 02:17:19.439514] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.016 [2024-07-15 02:17:19.439581] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.016 [2024-07-15 02:17:19.439756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.583 02:17:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:20.583 02:17:20 -- common/autotest_common.sh@852 -- # return 0 00:15:20.583 02:17:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:20.583 02:17:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:20.583 02:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.583 02:17:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.583 02:17:20 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:20.583 02:17:20 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:20.583 02:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.583 02:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.583 [2024-07-15 02:17:20.133119] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.583 02:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.583 02:17:20 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:20.583 02:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.842 02:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.842 02:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.842 02:17:20 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.842 02:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.842 02:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.842 [2024-07-15 02:17:20.157244] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.842 02:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.842 02:17:20 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.842 02:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.842 02:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.842 02:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.842 02:17:20 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
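Taken together, the rpc_cmd calls in this stretch (plus the namespace attach that follows immediately below) are the full sequence for standing up a zcopy-enabled TCP target; outside the harness the same calls go through rpc.py. In nvmf_create_transport, -c 0 sets the in-capsule data size to zero and --zcopy enables the zero-copy receive path under test; in nvmf_create_subsystem, -a allows any host, -s sets the serial number, and -m 10 caps the namespace count:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1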
00:15:20.842 02:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.842 02:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.842 malloc0 00:15:20.842 02:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.842 02:17:20 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:20.842 02:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:20.842 02:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:20.842 02:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:20.842 02:17:20 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:20.842 02:17:20 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:20.842 02:17:20 -- nvmf/common.sh@520 -- # config=() 00:15:20.842 02:17:20 -- nvmf/common.sh@520 -- # local subsystem config 00:15:20.842 02:17:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:20.842 02:17:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:20.842 { 00:15:20.842 "params": { 00:15:20.842 "name": "Nvme$subsystem", 00:15:20.842 "trtype": "$TEST_TRANSPORT", 00:15:20.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:20.842 "adrfam": "ipv4", 00:15:20.842 "trsvcid": "$NVMF_PORT", 00:15:20.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:20.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:20.842 "hdgst": ${hdgst:-false}, 00:15:20.842 "ddgst": ${ddgst:-false} 00:15:20.842 }, 00:15:20.842 "method": "bdev_nvme_attach_controller" 00:15:20.842 } 00:15:20.842 EOF 00:15:20.842 )") 00:15:20.842 02:17:20 -- nvmf/common.sh@542 -- # cat 00:15:20.842 02:17:20 -- nvmf/common.sh@544 -- # jq . 00:15:20.842 02:17:20 -- nvmf/common.sh@545 -- # IFS=, 00:15:20.842 02:17:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:20.842 "params": { 00:15:20.842 "name": "Nvme1", 00:15:20.842 "trtype": "tcp", 00:15:20.842 "traddr": "10.0.0.2", 00:15:20.842 "adrfam": "ipv4", 00:15:20.842 "trsvcid": "4420", 00:15:20.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.842 "hdgst": false, 00:15:20.842 "ddgst": false 00:15:20.842 }, 00:15:20.842 "method": "bdev_nvme_attach_controller" 00:15:20.842 }' 00:15:20.842 [2024-07-15 02:17:20.258965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:20.842 [2024-07-15 02:17:20.259119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85382 ] 00:15:21.101 [2024-07-15 02:17:20.401209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.101 [2024-07-15 02:17:20.489985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.359 Running I/O for 10 seconds... 
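The --json /dev/fd/62 argument hands bdevperf the configuration that gen_nvmf_target_json just printed. As a sketch of the same run with an ordinary file: only the inner bdev_nvme_attach_controller object below appears verbatim in the trace; the surrounding "subsystems"/"bdev" wrapper is an assumption about the generated layout, and /tmp/bdevperf.json is a hypothetical path.

    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192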
00:15:31.383 00:15:31.383 Latency(us) 00:15:31.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.383 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:31.383 Verification LBA range: start 0x0 length 0x1000 00:15:31.383 Nvme1n1 : 10.01 9470.70 73.99 0.00 0.00 13482.01 1414.98 20018.27 00:15:31.383 =================================================================================================================== 00:15:31.383 Total : 9470.70 73.99 0.00 0.00 13482.01 1414.98 20018.27 00:15:31.383 02:17:30 -- target/zcopy.sh@39 -- # perfpid=85500 00:15:31.383 02:17:30 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:31.383 02:17:30 -- common/autotest_common.sh@10 -- # set +x 00:15:31.383 02:17:30 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:31.383 02:17:30 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:31.383 02:17:30 -- nvmf/common.sh@520 -- # config=() 00:15:31.383 02:17:30 -- nvmf/common.sh@520 -- # local subsystem config 00:15:31.383 02:17:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:31.383 02:17:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:31.383 { 00:15:31.383 "params": { 00:15:31.383 "name": "Nvme$subsystem", 00:15:31.383 "trtype": "$TEST_TRANSPORT", 00:15:31.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:31.383 "adrfam": "ipv4", 00:15:31.383 "trsvcid": "$NVMF_PORT", 00:15:31.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:31.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:31.383 "hdgst": ${hdgst:-false}, 00:15:31.383 "ddgst": ${ddgst:-false} 00:15:31.383 }, 00:15:31.383 "method": "bdev_nvme_attach_controller" 00:15:31.383 } 00:15:31.383 EOF 00:15:31.383 )") 00:15:31.383 02:17:30 -- nvmf/common.sh@542 -- # cat 00:15:31.383 [2024-07-15 02:17:30.911233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.383 [2024-07-15 02:17:30.911325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.383 02:17:30 -- nvmf/common.sh@544 -- # jq . 
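The gen_nvmf_target_json xtrace interleaved here (config+=(), cat, jq ., IFS=',', printf) is the harness assembling that JSON inline: one bdev_nvme_attach_controller fragment per requested subsystem, comma-joined and then validated and pretty-printed through jq. Below is a runnable, condensed sketch of the mechanism; gen_attach_fragments is a hypothetical name, and the real helper also emits the full config wrapper and the hostnqn/digest fields seen above:

    gen_attach_fragments() {
        local s config=()
        for s in "${@:-1}"; do    # "${@:-1}" defaults to subsystem 1, as in the trace
            config+=("{\"method\":\"bdev_nvme_attach_controller\",\"params\":{\"name\":\"Nvme$s\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$s\"}}")
        done
        local IFS=,                        # comma-join the fragments
        printf '[%s]\n' "${config[*]}" | jq .   # validate and pretty-print
    }
    gen_attach_fragments 1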
00:15:31.383 02:17:30 -- nvmf/common.sh@545 -- # IFS=, 00:15:31.383 2024/07/15 02:17:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.383 02:17:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:31.383 "params": { 00:15:31.383 "name": "Nvme1", 00:15:31.383 "trtype": "tcp", 00:15:31.383 "traddr": "10.0.0.2", 00:15:31.383 "adrfam": "ipv4", 00:15:31.383 "trsvcid": "4420", 00:15:31.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.383 "hdgst": false, 00:15:31.383 "ddgst": false 00:15:31.383 }, 00:15:31.383 "method": "bdev_nvme_attach_controller" 00:15:31.383 }' 00:15:31.383 [2024-07-15 02:17:30.923122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.383 [2024-07-15 02:17:30.923163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.383 2024/07/15 02:17:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.383 [2024-07-15 02:17:30.935116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.383 [2024-07-15 02:17:30.935143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:30.947113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:30.947134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:30.959117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:30.959139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:30.965914] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:31.658 [2024-07-15 02:17:30.966075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85500 ] 00:15:31.658 [2024-07-15 02:17:30.971116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:30.971138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:30.983125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:30.983169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:30.995132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:30.995152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.007129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.007152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.019133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.019155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.031137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.031173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.043145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.043182] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.051139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.051160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.059136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.059157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.071150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.071171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.079145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.079167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.087149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.087170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.095152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.095174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.103985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.658 [2024-07-15 02:17:31.107190] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.107239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.115174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.115217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.123168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.123190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.135188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.135253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.143165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.143185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.151184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.151214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.163180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.163243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 
02:17:31.175213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.175250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.187199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.187234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.658 [2024-07-15 02:17:31.199218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.658 [2024-07-15 02:17:31.199254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.658 [2024-07-15 02:17:31.199817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.658 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.211228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.211281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.223218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.223244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.235221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.235258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.247237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.247274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.259265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.259289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.271271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.271293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.283265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.283285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.295299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.295318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.307253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.307273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.319333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.319361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.331301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.331324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.343322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.343345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.355332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.355355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.367327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.367362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.379324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.379347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.946 Running I/O for 5 seconds... 
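From this point on the log is the same three-line signature repeating: subsystem.c rejects the call because NSID 1 on cnode1 is already occupied by malloc0 (added at zcopy.sh@30 above), nvmf_rpc.c surfaces that as "Unable to add namespace", and the client-side wrapper logs the JSON-RPC failure with Code=-32602, the standard JSON-RPC "Invalid params" error. Only the timestamps advance between iterations; the second bdevperf's startup notices (spdk_pid85500, "Total cores available: 1", "Reactor started on core 0", "Running I/O for 5 seconds...") simply interleave with the loop. A rough sketch of what drives this, assuming the perfpid captured above and an SPDK checkout; the loop shape is illustrative, not the literal zcopy.sh code:

    # Keep re-issuing the same namespace add while bdevperf (pid $perfpid) runs;
    # every attempt should fail with -32602 because NSID 1 is already in use.
    perfpid=85500
    while kill -0 "$perfpid" 2>/dev/null; do
        if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
            echo "unexpected: duplicate NSID add succeeded" >&2
            exit 1
        fi
    done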
00:15:31.946 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.946 [2024-07-15 02:17:31.395266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.946 [2024-07-15 02:17:31.395309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.947 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.947 [2024-07-15 02:17:31.413539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.947 [2024-07-15 02:17:31.413565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.947 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.947 [2024-07-15 02:17:31.428432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.947 [2024-07-15 02:17:31.428476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.947 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.947 [2024-07-15 02:17:31.444789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.947 [2024-07-15 02:17:31.444831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.947 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.947 [2024-07-15 02:17:31.455321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.947 [2024-07-15 02:17:31.455360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.947 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.947 [2024-07-15 02:17:31.470298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.947 [2024-07-15 02:17:31.470336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:31.947 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:31.947 [2024-07-15 02:17:31.487750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:31.947 [2024-07-15 02:17:31.487775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:31.947 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.503023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.503053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.521471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.521499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.537008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.537037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.553966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.554003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.569402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.569427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.581079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.581109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.595715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.595740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.613611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.613651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.628560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.628587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.638536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.638562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.652784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.652812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.670892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.670920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.686523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.686551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.702983] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.703024] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.720381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.720410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.205 [2024-07-15 02:17:31.735021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.205 [2024-07-15 02:17:31.735048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.205 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.206 [2024-07-15 02:17:31.750677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.206 [2024-07-15 02:17:31.750703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.206 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.767429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.767457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.783083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.783110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.800986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.801031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.815670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 
02:17:31.815697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.831336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.831368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.848272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.848303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.865099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.865129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.880435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.880473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.889743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.889773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.905992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.906033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.923111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:32.464 [2024-07-15 02:17:31.923141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.938238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.938268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.948335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.948364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.964050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.964093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.464 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.464 [2024-07-15 02:17:31.979661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.464 [2024-07-15 02:17:31.979713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.465 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.465 [2024-07-15 02:17:31.990172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.465 [2024-07-15 02:17:31.990231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.465 2024/07/15 02:17:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.465 [2024-07-15 02:17:32.004256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.465 [2024-07-15 02:17:32.004285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.465 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.723 [2024-07-15 02:17:32.020883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:32.723 [2024-07-15 02:17:32.020938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.723 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.723 [2024-07-15 02:17:32.036869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.723 [2024-07-15 02:17:32.036925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.723 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.723 [2024-07-15 02:17:32.047273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.723 [2024-07-15 02:17:32.047317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.723 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.723 [2024-07-15 02:17:32.062159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.723 [2024-07-15 02:17:32.062191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.723 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.723 [2024-07-15 02:17:32.081625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.723 [2024-07-15 02:17:32.081673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.723 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.723 [2024-07-15 02:17:32.096823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.724 [2024-07-15 02:17:32.096883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.724 [2024-07-15 02:17:32.114815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.724 [2024-07-15 02:17:32.114857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.724 [2024-07-15 02:17:32.129974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:32.724 [2024-07-15 02:17:32.130013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.724 [2024-07-15 02:17:32.140240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.724 [2024-07-15 02:17:32.140282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.724 [2024-07-15 02:17:32.155226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.724 [2024-07-15 02:17:32.155285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.724 [2024-07-15 02:17:32.171480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.724 [2024-07-15 02:17:32.171508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.724 [2024-07-15 02:17:32.181815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.724 [2024-07-15 02:17:32.181842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.724 [2024-07-15 02:17:32.191658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.724 [2024-07-15 02:17:32.191684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.724 [2024-07-15 02:17:32.201285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:32.724 [2024-07-15 02:17:32.201312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:32.724 [2024-07-15 02:17:32.216576] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:32.724 [2024-07-15 02:17:32.216618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:32.724 [2024-07-15 02:17:32.225723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:32.724 [2024-07-15 02:17:32.225749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:32.724 2024/07/15 02:17:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-message sequence repeats for every further nvmf_subsystem_add_ns attempt, with only the timestamps advancing (02:17:32.241671 through 02:17:34.131516, elapsed 00:15:32.724 through 00:15:34.800); each attempt is rejected with Code=-32602 Msg=Invalid parameters because NSID 1 is already in use ...]
02:17:34.146877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.146918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.156390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.156428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.168392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.168434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.179189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.179222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.191748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.191786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.201852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.201887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.212766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.212799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:34.800 [2024-07-15 02:17:34.225934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.225980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.242975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.243029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.258132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.258186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.270223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.270256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.287454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.287488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.302221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.302266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.310940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.310969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.327619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.327663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:34.800 [2024-07-15 02:17:34.344780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:34.800 [2024-07-15 02:17:34.344820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:34.800 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.360813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.360855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.380020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.380063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.394416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.394444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.411982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.412009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.427271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.427298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.438196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.438242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.455248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.455275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.470603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.470642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.481482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.481510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.496426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.496455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.507582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.507629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.523072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.523108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.538325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.538382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.548845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.548871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.562822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.562847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.578673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.578698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.595182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.595208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.059 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.059 [2024-07-15 02:17:34.612148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.059 [2024-07-15 02:17:34.612173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.629633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.629671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.645490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.645516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.662226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.662251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.678408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.678433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.690479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.690504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.705908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.705934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.722128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.722153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.734225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.734267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.743745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.743771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.755056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.755082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.773865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.773908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.787925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.787953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.803456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.803481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.819230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.819256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.836457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.836486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.853898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.853924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.319 [2024-07-15 02:17:34.869688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.319 [2024-07-15 02:17:34.869714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.319 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:34.884962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:34.884996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:34.897140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:34.897167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:34.914037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:34.914064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:34.929314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:34.929339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:34.946026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:34.946051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:34 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:34.962947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:34.962974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:34.979977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:34.980019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:34.994765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:34.994791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:35.010859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:35.010885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:35.028515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:35.028544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:35.044650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:35.044704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:35.055879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:35.055922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:35 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:35.072830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:35.072855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:35.088463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:35.088489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:35.105949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:35.105980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:35.120014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:35.120041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.578 [2024-07-15 02:17:35.129282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.578 [2024-07-15 02:17:35.129308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.578 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.837 [2024-07-15 02:17:35.139643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.837 [2024-07-15 02:17:35.139670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.837 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.837 [2024-07-15 02:17:35.149311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.837 [2024-07-15 02:17:35.149337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.837 2024/07/15 
02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.837 [2024-07-15 02:17:35.163586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.837 [2024-07-15 02:17:35.163646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.837 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.837 [2024-07-15 02:17:35.178344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.837 [2024-07-15 02:17:35.178371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.837 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.837 [2024-07-15 02:17:35.194431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.837 [2024-07-15 02:17:35.194457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.837 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.837 [2024-07-15 02:17:35.211596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.837 [2024-07-15 02:17:35.211643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.837 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.837 [2024-07-15 02:17:35.228544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.837 [2024-07-15 02:17:35.228615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.837 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.837 [2024-07-15 02:17:35.245437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.837 [2024-07-15 02:17:35.245481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.837 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.837 [2024-07-15 02:17:35.262622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.837 [2024-07-15 02:17:35.262665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:35.838 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.838 [2024-07-15 02:17:35.279513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.838 [2024-07-15 02:17:35.279539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.838 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.838 [2024-07-15 02:17:35.294951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.838 [2024-07-15 02:17:35.295007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.838 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.838 [2024-07-15 02:17:35.311552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.838 [2024-07-15 02:17:35.311581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.838 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.838 [2024-07-15 02:17:35.326625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.838 [2024-07-15 02:17:35.326685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.838 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.838 [2024-07-15 02:17:35.343285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.838 [2024-07-15 02:17:35.343312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.838 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.838 [2024-07-15 02:17:35.359066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.838 [2024-07-15 02:17:35.359096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.838 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.838 [2024-07-15 02:17:35.376455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.838 [2024-07-15 02:17:35.376486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:35.838 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.098 [2024-07-15 02:17:35.392882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.098 [2024-07-15 02:17:35.392912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.098 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.098 [2024-07-15 02:17:35.409539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.098 [2024-07-15 02:17:35.409566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.098 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.098 [2024-07-15 02:17:35.425285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.098 [2024-07-15 02:17:35.425313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.098 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.098 [2024-07-15 02:17:35.436244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.098 [2024-07-15 02:17:35.436285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.098 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.098 [2024-07-15 02:17:35.451197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.098 [2024-07-15 02:17:35.451251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.098 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.098 [2024-07-15 02:17:35.468512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.098 [2024-07-15 02:17:35.468543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.098 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.098 [2024-07-15 02:17:35.484320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.098 [2024-07-15 02:17:35.484349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace
00:15:36.098 2024/07/15 02:17:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:36.098 [2024-07-15 02:17:35.501993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:36.098 [2024-07-15 02:17:35.502024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same three-line sequence (JSON-RPC error Code=-32602, "Requested NSID 1 already in use", "Unable to add namespace") repeats verbatim with only the timestamps advancing, every 10-20 ms from 02:17:35.50 through 02:17:36.39, while the zcopy test's duplicate-namespace loop runs ...]
00:15:36.880 Latency(us)
00:15:36.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:36.880 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:36.880 Nvme1n1 : 5.01 12072.57 94.32 0.00 0.00 10589.78 4230.05 20971.52
00:15:36.880 ===================================================================================================================
00:15:36.880 Total : 12072.57 94.32 0.00 0.00 10589.78 4230.05 20971.52
[... after the I/O summary, the identical nvmf_subsystem_add_ns error triple keeps arriving at roughly 12 ms intervals (02:17:36.40 through 02:17:36.60) until the background add-namespace loop is torn down ...]
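For reference, the collision that produced the flood above can be reproduced by hand against a running target with SPDK's stock scripts/rpc.py. A minimal sketch: the NQN, bdev name, NSID, and -32602 result come from the log itself, while the malloc sizes and the default RPC socket are assumptions.

  # claim NSID 1 once, then try to claim it again
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first claim of NSID 1 succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: Code=-32602, "Requested NSID 1 already in use"

The test drives exactly this rejection in a loop on purpose, which is why every repeat is identical except for the timestamp.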
00:15:37.141 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (85500) - No such process
00:15:37.141 02:17:36 -- target/zcopy.sh@49 -- # wait 85500
00:15:37.141 02:17:36 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:37.141 02:17:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:37.141 02:17:36 -- common/autotest_common.sh@10 -- # set +x
00:15:37.141 02:17:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:37.141 02:17:36 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:37.141 02:17:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:37.141 02:17:36 -- common/autotest_common.sh@10 -- # set +x
00:15:37.141 delay0
00:15:37.141 02:17:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:37.141 02:17:36 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:15:37.141 02:17:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:37.141 02:17:36 -- common/autotest_common.sh@10 -- # set +x
00:15:37.141 02:17:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:37.141 02:17:36 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:15:37.399 [2024-07-15 02:17:36.805502] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:15:45.514 Initializing NVMe Controllers
00:15:45.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:45.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:45.514 Initialization complete. Launching workers.
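Before the abort statistics below, note what the bdev_delay_create step above is for. This is my reading of the test's design, not something the log states: a plain malloc bdev completes I/O almost instantly, so nothing would still be in flight for the abort example to cancel. Stacking delay0 on top of malloc0 slows every operation; per the delay bdev RPC, the four values are microsecond latencies (average and p99 for reads via -r/-t, average and p99 for writes via -w/-n), so 1000000 means roughly one second per I/O. A sketch of the same stacking outside the test, reusing the names from the log:

  # wrap malloc0 in a delay bdev that slows every I/O to ~1 s
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # expose the slow bdev as NSID 1 so the abort example has in-flight I/O to kill
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1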
00:15:45.514 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 316, failed: 5661
00:15:45.514 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5932, failed to submit 45
00:15:45.514 success 5756, unsuccess 176, failed 0
00:15:45.514 02:17:43 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:15:45.514 02:17:43 -- target/zcopy.sh@60 -- # nvmftestfini
00:15:45.514 02:17:43 -- nvmf/common.sh@476 -- # nvmfcleanup
00:15:45.514 02:17:43 -- nvmf/common.sh@116 -- # sync
00:15:45.514 02:17:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:15:45.514 02:17:43 -- nvmf/common.sh@119 -- # set +e
00:15:45.514 02:17:43 -- nvmf/common.sh@120 -- # for i in {1..20}
00:15:45.514 02:17:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:15:45.514 rmmod nvme_tcp
00:15:45.514 rmmod nvme_fabrics
00:15:45.514 rmmod nvme_keyring
00:15:45.514 02:17:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:15:45.514 02:17:43 -- nvmf/common.sh@123 -- # set -e
00:15:45.514 02:17:43 -- nvmf/common.sh@124 -- # return 0
00:15:45.514 02:17:43 -- nvmf/common.sh@477 -- # '[' -n 85331 ']'
00:15:45.514 02:17:43 -- nvmf/common.sh@478 -- # killprocess 85331
00:15:45.514 02:17:43 -- common/autotest_common.sh@926 -- # '[' -z 85331 ']'
00:15:45.514 02:17:43 -- common/autotest_common.sh@930 -- # kill -0 85331
00:15:45.514 02:17:43 -- common/autotest_common.sh@931 -- # uname
00:15:45.514 02:17:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:15:45.514 02:17:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85331
00:15:45.514 02:17:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:15:45.514 02:17:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:15:45.514 killing process with pid 85331
00:15:45.514 02:17:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85331'
00:15:45.514 02:17:43 -- common/autotest_common.sh@945 -- # kill 85331
00:15:45.515 02:17:43 -- common/autotest_common.sh@950 -- # wait 85331
00:15:45.515 02:17:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:15:45.515 02:17:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:15:45.515 02:17:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:15:45.515 02:17:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:45.515 02:17:44 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:15:45.515 02:17:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:45.515 02:17:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:45.515 02:17:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:45.515 02:17:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:15:45.515
00:15:45.515 real 0m25.606s
00:15:45.515 user 0m39.663s
00:15:45.515 sys 0m8.080s
00:15:45.515 02:17:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:45.515 02:17:44 -- common/autotest_common.sh@10 -- # set +x
00:15:45.515 ************************************
00:15:45.515 END TEST nvmf_zcopy
00:15:45.515 ************************************
00:15:45.515 02:17:44 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:15:45.515 02:17:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:15:45.515 02:17:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:15:45.515 02:17:44 -- common/autotest_common.sh@10 -- # set +x
00:15:45.515 ************************************
00:15:45.515 START TEST
nvmf_nmic 00:15:45.515 ************************************ 00:15:45.515 02:17:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:45.515 * Looking for test storage... 00:15:45.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:45.515 02:17:44 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.515 02:17:44 -- nvmf/common.sh@7 -- # uname -s 00:15:45.515 02:17:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.515 02:17:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.515 02:17:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.515 02:17:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.515 02:17:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.515 02:17:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.515 02:17:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.515 02:17:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.515 02:17:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.515 02:17:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.515 02:17:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:15:45.515 02:17:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:15:45.515 02:17:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.515 02:17:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.515 02:17:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.515 02:17:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.515 02:17:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.515 02:17:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.515 02:17:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.515 02:17:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.515 02:17:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.515 02:17:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.515 02:17:44 -- paths/export.sh@5 -- # export PATH 00:15:45.515 02:17:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.515 02:17:44 -- nvmf/common.sh@46 -- # : 0 00:15:45.515 02:17:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:45.515 02:17:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:45.515 02:17:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:45.515 02:17:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.515 02:17:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.515 02:17:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:45.515 02:17:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:45.515 02:17:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:45.515 02:17:44 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.515 02:17:44 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.515 02:17:44 -- target/nmic.sh@14 -- # nvmftestinit 00:15:45.515 02:17:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:45.515 02:17:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.515 02:17:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:45.515 02:17:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:45.515 02:17:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:45.515 02:17:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.515 02:17:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.515 02:17:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.515 02:17:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:45.515 02:17:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:45.515 02:17:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:45.515 02:17:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:45.515 02:17:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:45.515 02:17:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:45.515 02:17:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.515 02:17:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.515 02:17:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:45.515 02:17:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:45.515 02:17:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.515 02:17:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.515 02:17:44 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.515 02:17:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.515 02:17:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.515 02:17:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.515 02:17:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.515 02:17:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.515 02:17:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:45.515 02:17:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:45.515 Cannot find device "nvmf_tgt_br" 00:15:45.515 02:17:44 -- nvmf/common.sh@154 -- # true 00:15:45.515 02:17:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.515 Cannot find device "nvmf_tgt_br2" 00:15:45.515 02:17:44 -- nvmf/common.sh@155 -- # true 00:15:45.516 02:17:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:45.516 02:17:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:45.516 Cannot find device "nvmf_tgt_br" 00:15:45.516 02:17:44 -- nvmf/common.sh@157 -- # true 00:15:45.516 02:17:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:45.516 Cannot find device "nvmf_tgt_br2" 00:15:45.516 02:17:44 -- nvmf/common.sh@158 -- # true 00:15:45.516 02:17:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:45.516 02:17:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:45.516 02:17:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.516 02:17:44 -- nvmf/common.sh@161 -- # true 00:15:45.516 02:17:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.516 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.516 02:17:44 -- nvmf/common.sh@162 -- # true 00:15:45.516 02:17:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.516 02:17:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.516 02:17:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:45.516 02:17:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:45.516 02:17:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:45.516 02:17:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:45.516 02:17:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:45.516 02:17:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:45.516 02:17:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:45.516 02:17:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:45.516 02:17:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:45.516 02:17:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:45.516 02:17:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:45.516 02:17:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:45.516 02:17:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:45.516 02:17:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up
02:17:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
02:17:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
02:17:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
02:17:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
02:17:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
02:17:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
02:17:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
02:17:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:15:45.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:45.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms
00:15:45.516
00:15:45.516 --- 10.0.0.2 ping statistics ---
00:15:45.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:45.516 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:15:45.516 02:17:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:15:45.516 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:45.516 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms
00:15:45.516
00:15:45.516 --- 10.0.0.3 ping statistics ---
00:15:45.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:45.516 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:15:45.516 02:17:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:45.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:45.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:15:45.516
00:15:45.516 --- 10.0.0.1 ping statistics ---
00:15:45.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:45.516 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:15:45.516 02:17:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:45.516 02:17:44 -- nvmf/common.sh@421 -- # return 0
00:15:45.516 02:17:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:15:45.516 02:17:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:45.516 02:17:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:15:45.516 02:17:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:15:45.516 02:17:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:45.516 02:17:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:15:45.516 02:17:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:15:45.516 02:17:44 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:15:45.516 02:17:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:15:45.516 02:17:44 -- common/autotest_common.sh@712 -- # xtrace_disable
00:15:45.516 02:17:44 -- common/autotest_common.sh@10 -- # set +x
00:15:45.516 02:17:44 -- nvmf/common.sh@469 -- # nvmfpid=85828
00:15:45.516 02:17:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:45.516 02:17:44 -- nvmf/common.sh@470 -- # waitforlisten 85828
00:15:45.516 02:17:44 -- common/autotest_common.sh@819 -- # '[' -z 85828 ']'
00:15:45.516 02:17:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:45.516 02:17:44 -- common/autotest_common.sh@824 -- # local max_retries=100
00:15:45.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
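Condensed, the topology that nvmf_veth_init assembled above looks like this. A sketch of the same iproute2 calls (run as root); only the interface names and addresses from the log are used:

  ip netns add nvmf_tgt_ns_spdk                                # target runs in its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                              # bridge joins the host-side ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # plus 'ip link set ... up' on each interface and the iptables ACCEPT rules for TCP/4420

The three pings are the harness's sanity check that the initiator side (10.0.0.1) and the target addresses inside the namespace (10.0.0.2 and 10.0.0.3) can reach each other across the bridge before nvmf_tgt is started.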
00:15:45.516 02:17:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.516 02:17:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:45.516 02:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:45.516 [2024-07-15 02:17:44.861858] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:45.516 [2024-07-15 02:17:44.861953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.516 [2024-07-15 02:17:45.005202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.774 [2024-07-15 02:17:45.093591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:45.774 [2024-07-15 02:17:45.093810] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.774 [2024-07-15 02:17:45.093830] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.774 [2024-07-15 02:17:45.093842] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:45.774 [2024-07-15 02:17:45.094302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.774 [2024-07-15 02:17:45.094456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.774 [2024-07-15 02:17:45.095283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.774 [2024-07-15 02:17:45.095340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.341 02:17:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:46.341 02:17:45 -- common/autotest_common.sh@852 -- # return 0 00:15:46.341 02:17:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:46.341 02:17:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:46.341 02:17:45 -- common/autotest_common.sh@10 -- # set +x 00:15:46.599 02:17:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.599 02:17:45 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.599 02:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.599 02:17:45 -- common/autotest_common.sh@10 -- # set +x 00:15:46.599 [2024-07-15 02:17:45.910046] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.599 02:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.599 02:17:45 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:46.599 02:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.599 02:17:45 -- common/autotest_common.sh@10 -- # set +x 00:15:46.599 Malloc0 00:15:46.599 02:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.599 02:17:45 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:46.599 02:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.599 02:17:45 -- common/autotest_common.sh@10 -- # set +x 00:15:46.599 02:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.599 02:17:45 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:46.599 02:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.599 02:17:45 
-- common/autotest_common.sh@10 -- # set +x
00:15:46.599 02:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:46.599 02:17:45 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:46.599 02:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:46.599 02:17:45 -- common/autotest_common.sh@10 -- # set +x
00:15:46.599 [2024-07-15 02:17:45.991562] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:46.599 02:17:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:46.599 test case1: single bdev can't be used in multiple subsystems
00:15:46.599 02:17:45 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:15:46.599 02:17:45 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:15:46.599 02:17:45 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:46.599 02:17:45 -- common/autotest_common.sh@10 -- # set +x
00:15:46.599 02:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:46.599 02:17:46 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:15:46.599 02:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:46.599 02:17:46 -- common/autotest_common.sh@10 -- # set +x
00:15:46.599 02:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:46.599 02:17:46 -- target/nmic.sh@28 -- # nmic_status=0
00:15:46.599 02:17:46 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:15:46.599 02:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:46.599 02:17:46 -- common/autotest_common.sh@10 -- # set +x
00:15:46.599 [2024-07-15 02:17:46.015438] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:15:46.599 [2024-07-15 02:17:46.015480] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:15:46.599 [2024-07-15 02:17:46.015502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:46.599 2024/07/15 02:17:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:46.599 request:
00:15:46.599 {
00:15:46.599 "method": "nvmf_subsystem_add_ns",
00:15:46.599 "params": {
00:15:46.599 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:15:46.599 "namespace": {
00:15:46.599 "bdev_name": "Malloc0"
00:15:46.599 }
00:15:46.599 }
00:15:46.599 }
00:15:46.599 Got JSON-RPC error response
00:15:46.599 GoRPCClient: error on JSON-RPC call
00:15:46.599 02:17:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]]
00:15:46.599 02:17:46 -- target/nmic.sh@29 -- # nmic_status=1
00:15:46.599 02:17:46 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:15:46.599 02:17:46 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
Adding namespace failed - expected result.
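This failure is the point of test case1: when a namespace is added, the target opens the backing bdev with an exclusive write claim (the bdev_open error above), so a bdev already attached to cnode1 cannot also be attached to cnode2. Reduced to the two calls involved, a sketch using the names from the log, not the test script itself:

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # cnode1 claims Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: already claimed, type exclusive_write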
00:15:46.599 test case2: host connect to nvmf target in multiple paths 00:15:46.599 02:17:46 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:46.599 02:17:46 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:46.599 02:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.599 02:17:46 -- common/autotest_common.sh@10 -- # set +x 00:15:46.599 [2024-07-15 02:17:46.027538] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:46.599 02:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.599 02:17:46 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:46.857 02:17:46 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:46.857 02:17:46 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:46.857 02:17:46 -- common/autotest_common.sh@1177 -- # local i=0 00:15:46.857 02:17:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:46.857 02:17:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:46.857 02:17:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:48.862 02:17:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:48.862 02:17:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:48.862 02:17:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:48.862 02:17:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:48.862 02:17:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:48.862 02:17:48 -- common/autotest_common.sh@1187 -- # return 0 00:15:48.862 02:17:48 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:48.862 [global] 00:15:48.862 thread=1 00:15:48.862 invalidate=1 00:15:48.862 rw=write 00:15:48.862 time_based=1 00:15:48.862 runtime=1 00:15:48.862 ioengine=libaio 00:15:48.862 direct=1 00:15:48.862 bs=4096 00:15:48.862 iodepth=1 00:15:48.862 norandommap=0 00:15:48.862 numjobs=1 00:15:48.862 00:15:48.862 verify_dump=1 00:15:48.862 verify_backlog=512 00:15:48.862 verify_state_save=0 00:15:48.862 do_verify=1 00:15:48.862 verify=crc32c-intel 00:15:48.862 [job0] 00:15:48.862 filename=/dev/nvme0n1 00:15:49.120 Could not set queue depth (nvme0n1) 00:15:49.120 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:49.120 fio-3.35 00:15:49.120 Starting 1 thread 00:15:50.494 00:15:50.494 job0: (groupid=0, jobs=1): err= 0: pid=85938: Mon Jul 15 02:17:49 2024 00:15:50.494 read: IOPS=3006, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1001msec) 00:15:50.494 slat (nsec): min=13205, max=47818, avg=15841.26, stdev=3133.69 00:15:50.494 clat (usec): min=139, max=3091, avg=166.49, stdev=76.62 00:15:50.494 lat (usec): min=153, max=3109, avg=182.33, stdev=76.94 00:15:50.494 clat percentiles (usec): 00:15:50.495 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:15:50.495 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:15:50.495 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 
182], 95.00th=[ 190], 00:15:50.495 | 99.00th=[ 210], 99.50th=[ 223], 99.90th=[ 297], 99.95th=[ 2737], 00:15:50.495 | 99.99th=[ 3097] 00:15:50.495 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:15:50.495 slat (usec): min=18, max=116, avg=23.49, stdev= 5.51 00:15:50.495 clat (usec): min=96, max=7837, avg=119.57, stdev=143.25 00:15:50.495 lat (usec): min=117, max=7858, avg=143.06, stdev=143.43 00:15:50.495 clat percentiles (usec): 00:15:50.495 | 1.00th=[ 101], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 108], 00:15:50.495 | 30.00th=[ 110], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 116], 00:15:50.495 | 70.00th=[ 119], 80.00th=[ 125], 90.00th=[ 133], 95.00th=[ 143], 00:15:50.495 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 241], 99.95th=[ 1614], 00:15:50.495 | 99.99th=[ 7832] 00:15:50.495 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:15:50.495 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:50.495 lat (usec) : 100=0.31%, 250=99.52%, 500=0.07%, 1000=0.02% 00:15:50.495 lat (msec) : 2=0.03%, 4=0.03%, 10=0.02% 00:15:50.495 cpu : usr=2.30%, sys=9.00%, ctx=6082, majf=0, minf=2 00:15:50.495 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.495 issued rwts: total=3010,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.495 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.495 00:15:50.495 Run status group 0 (all jobs): 00:15:50.495 READ: bw=11.7MiB/s (12.3MB/s), 11.7MiB/s-11.7MiB/s (12.3MB/s-12.3MB/s), io=11.8MiB (12.3MB), run=1001-1001msec 00:15:50.495 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:15:50.495 00:15:50.495 Disk stats (read/write): 00:15:50.495 nvme0n1: ios=2609/2918, merge=0/0, ticks=462/382, in_queue=844, util=90.37% 00:15:50.495 02:17:49 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:50.495 02:17:49 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.495 02:17:49 -- common/autotest_common.sh@1198 -- # local i=0 00:15:50.495 02:17:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:50.495 02:17:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.495 02:17:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:50.495 02:17:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.495 02:17:49 -- common/autotest_common.sh@1210 -- # return 0 00:15:50.495 02:17:49 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:50.495 02:17:49 -- target/nmic.sh@53 -- # nvmftestfini 00:15:50.495 02:17:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:50.495 02:17:49 -- nvmf/common.sh@116 -- # sync 00:15:50.495 02:17:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:50.495 02:17:49 -- nvmf/common.sh@119 -- # set +e 00:15:50.495 02:17:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:50.495 02:17:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:50.495 rmmod nvme_tcp 00:15:50.495 rmmod nvme_fabrics 00:15:50.495 rmmod nvme_keyring 00:15:50.495 02:17:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:50.495 02:17:49 -- nvmf/common.sh@123 -- # set -e 00:15:50.495 02:17:49 -- 
nvmf/common.sh@124 -- # return 0 00:15:50.495 02:17:49 -- nvmf/common.sh@477 -- # '[' -n 85828 ']' 00:15:50.495 02:17:49 -- nvmf/common.sh@478 -- # killprocess 85828 00:15:50.495 02:17:49 -- common/autotest_common.sh@926 -- # '[' -z 85828 ']' 00:15:50.495 02:17:49 -- common/autotest_common.sh@930 -- # kill -0 85828 00:15:50.495 02:17:49 -- common/autotest_common.sh@931 -- # uname 00:15:50.495 02:17:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:50.495 02:17:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85828 00:15:50.495 02:17:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:50.495 02:17:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:50.495 02:17:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85828' 00:15:50.495 killing process with pid 85828 00:15:50.495 02:17:49 -- common/autotest_common.sh@945 -- # kill 85828 00:15:50.495 02:17:49 -- common/autotest_common.sh@950 -- # wait 85828 00:15:50.754 02:17:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:50.754 02:17:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:50.754 02:17:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:50.754 02:17:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.754 02:17:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:50.754 02:17:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.754 02:17:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.754 02:17:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.754 02:17:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:50.754 00:15:50.754 real 0m5.891s 00:15:50.754 user 0m19.924s 00:15:50.754 sys 0m1.403s 00:15:50.754 02:17:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.754 02:17:50 -- common/autotest_common.sh@10 -- # set +x 00:15:50.754 ************************************ 00:15:50.754 END TEST nvmf_nmic 00:15:50.754 ************************************ 00:15:50.754 02:17:50 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:50.754 02:17:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:50.754 02:17:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:50.754 02:17:50 -- common/autotest_common.sh@10 -- # set +x 00:15:50.754 ************************************ 00:15:50.754 START TEST nvmf_fio_target 00:15:50.754 ************************************ 00:15:50.754 02:17:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:51.013 * Looking for test storage... 
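Before the fio-target preamble continues, a note on test case 2, which just completed above: the same subsystem NQN was exposed on two portals (4420 and 4421), the host connected once per portal, and the namespace still surfaced as a single block device (waitforserial counted one entry in lsblk), while the final nvme disconnect reported "disconnected 2 controller(s)" — one per path. A condensed sketch under the same assumptions (target up, both listeners added as above):

    nqn=nqn.2016-06.io.spdk:cnode1
    hostargs="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1"

    # one controller per portal; both attach the same namespace
    nvme connect $hostargs -t tcp -n $nqn -a 10.0.0.2 -s 4420
    nvme connect $hostargs -t tcp -n $nqn -a 10.0.0.2 -s 4421
    # count namespaces carrying the malloc serial; the test loops until this reaches 1
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
    # a single disconnect by NQN tears down every controller on that subsystem
    nvme disconnect -n $nqn

The nvmf_fio_target run starting here builds a larger version of the same target, so the setup below repeats this pattern with more bdevs.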
00:15:51.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.013 02:17:50 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.013 02:17:50 -- nvmf/common.sh@7 -- # uname -s 00:15:51.013 02:17:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.013 02:17:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.013 02:17:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.013 02:17:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.013 02:17:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.013 02:17:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.013 02:17:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.013 02:17:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.013 02:17:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.013 02:17:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.013 02:17:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:15:51.013 02:17:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:15:51.013 02:17:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.013 02:17:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.013 02:17:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.013 02:17:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.013 02:17:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.013 02:17:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.013 02:17:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.013 02:17:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.013 02:17:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.013 02:17:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.013 02:17:50 -- paths/export.sh@5 
-- # export PATH 00:15:51.013 02:17:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.013 02:17:50 -- nvmf/common.sh@46 -- # : 0 00:15:51.013 02:17:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:51.013 02:17:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:51.013 02:17:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:51.013 02:17:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.013 02:17:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.013 02:17:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:51.013 02:17:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:51.013 02:17:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:51.013 02:17:50 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.013 02:17:50 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.013 02:17:50 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.013 02:17:50 -- target/fio.sh@16 -- # nvmftestinit 00:15:51.013 02:17:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:51.013 02:17:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.013 02:17:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:51.013 02:17:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:51.013 02:17:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:51.013 02:17:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.013 02:17:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.013 02:17:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.013 02:17:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:51.013 02:17:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:51.013 02:17:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:51.013 02:17:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:51.013 02:17:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:51.013 02:17:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:51.013 02:17:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.013 02:17:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.013 02:17:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:51.013 02:17:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:51.013 02:17:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.013 02:17:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.013 02:17:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.013 02:17:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.013 02:17:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.013 02:17:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.013 02:17:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.013 02:17:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.013 02:17:50 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:51.013 02:17:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:51.013 Cannot find device "nvmf_tgt_br" 00:15:51.013 02:17:50 -- nvmf/common.sh@154 -- # true 00:15:51.013 02:17:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.013 Cannot find device "nvmf_tgt_br2" 00:15:51.013 02:17:50 -- nvmf/common.sh@155 -- # true 00:15:51.013 02:17:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:51.013 02:17:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:51.013 Cannot find device "nvmf_tgt_br" 00:15:51.013 02:17:50 -- nvmf/common.sh@157 -- # true 00:15:51.013 02:17:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:51.013 Cannot find device "nvmf_tgt_br2" 00:15:51.013 02:17:50 -- nvmf/common.sh@158 -- # true 00:15:51.013 02:17:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:51.013 02:17:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:51.013 02:17:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.013 02:17:50 -- nvmf/common.sh@161 -- # true 00:15:51.013 02:17:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.014 02:17:50 -- nvmf/common.sh@162 -- # true 00:15:51.014 02:17:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.014 02:17:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.014 02:17:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.014 02:17:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.014 02:17:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.272 02:17:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.272 02:17:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.272 02:17:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:51.272 02:17:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:51.272 02:17:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:51.272 02:17:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:51.272 02:17:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:51.272 02:17:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:51.272 02:17:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.272 02:17:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.272 02:17:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.272 02:17:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:51.272 02:17:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:51.272 02:17:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.272 02:17:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.272 02:17:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.272 02:17:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.272 02:17:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.272 02:17:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:51.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:51.272 00:15:51.272 --- 10.0.0.2 ping statistics --- 00:15:51.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.273 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:51.273 02:17:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:51.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:15:51.273 00:15:51.273 --- 10.0.0.3 ping statistics --- 00:15:51.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.273 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:51.273 02:17:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:51.273 00:15:51.273 --- 10.0.0.1 ping statistics --- 00:15:51.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.273 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:51.273 02:17:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.273 02:17:50 -- nvmf/common.sh@421 -- # return 0 00:15:51.273 02:17:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:51.273 02:17:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.273 02:17:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:51.273 02:17:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:51.273 02:17:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.273 02:17:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:51.273 02:17:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:51.273 02:17:50 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:51.273 02:17:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:51.273 02:17:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:51.273 02:17:50 -- common/autotest_common.sh@10 -- # set +x 00:15:51.273 02:17:50 -- nvmf/common.sh@469 -- # nvmfpid=86115 00:15:51.273 02:17:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:51.273 02:17:50 -- nvmf/common.sh@470 -- # waitforlisten 86115 00:15:51.273 02:17:50 -- common/autotest_common.sh@819 -- # '[' -z 86115 ']' 00:15:51.273 02:17:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.273 02:17:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:51.273 02:17:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.273 02:17:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:51.273 02:17:50 -- common/autotest_common.sh@10 -- # set +x 00:15:51.273 [2024-07-15 02:17:50.771751] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
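While nvmf_tgt initializes inside the namespace, it is worth summarizing the plumbing that nvmf_veth_init just built: the target runs in netns nvmf_tgt_ns_spdk and reaches the initiator through veth pairs joined by the nvmf_br bridge, and the three pings confirmed 10.0.0.1 (initiator) plus 10.0.0.2/10.0.0.3 (target) before the app was launched. A condensed sketch of the same topology, using the interface names and addresses from common.sh above (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity):

    # namespace for the target and two veth pairs bridging it to the host
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addresses: initiator on 10.0.0.1, target on 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # bring everything up, then bridge the two host-side veth ends together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # admit NVMe/TCP traffic and allow bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # sanity check: initiator can reach the target address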
00:15:51.273 [2024-07-15 02:17:50.771841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.531 [2024-07-15 02:17:50.911501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.531 [2024-07-15 02:17:50.987750] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:51.531 [2024-07-15 02:17:50.987909] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.531 [2024-07-15 02:17:50.987940] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.531 [2024-07-15 02:17:50.987950] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.531 [2024-07-15 02:17:50.988078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.531 [2024-07-15 02:17:50.988665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.531 [2024-07-15 02:17:50.988887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.531 [2024-07-15 02:17:50.989272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.467 02:17:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:52.467 02:17:51 -- common/autotest_common.sh@852 -- # return 0 00:15:52.467 02:17:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:52.467 02:17:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:52.467 02:17:51 -- common/autotest_common.sh@10 -- # set +x 00:15:52.467 02:17:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.467 02:17:51 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:52.467 [2024-07-15 02:17:52.021421] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.725 02:17:52 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.983 02:17:52 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:52.983 02:17:52 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.242 02:17:52 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:53.242 02:17:52 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.500 02:17:52 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:53.500 02:17:52 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:53.759 02:17:53 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:53.759 02:17:53 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:54.016 02:17:53 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.274 02:17:53 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:54.274 02:17:53 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.531 02:17:53 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:54.531 02:17:53 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.787 02:17:54 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:15:54.787 02:17:54 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:55.045 02:17:54 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:55.302 02:17:54 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:55.302 02:17:54 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.565 02:17:54 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:55.565 02:17:54 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:55.822 02:17:55 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.822 [2024-07-15 02:17:55.377822] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.080 02:17:55 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:56.080 02:17:55 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:56.337 02:17:55 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:56.594 02:17:56 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:56.594 02:17:56 -- common/autotest_common.sh@1177 -- # local i=0 00:15:56.594 02:17:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.594 02:17:56 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:15:56.594 02:17:56 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:15:56.594 02:17:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:58.495 02:17:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:58.753 02:17:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:58.753 02:17:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:58.753 02:17:58 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:15:58.753 02:17:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.753 02:17:58 -- common/autotest_common.sh@1187 -- # return 0 00:15:58.753 02:17:58 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:58.753 [global] 00:15:58.753 thread=1 00:15:58.753 invalidate=1 00:15:58.753 rw=write 00:15:58.753 time_based=1 00:15:58.753 runtime=1 00:15:58.753 ioengine=libaio 00:15:58.753 direct=1 00:15:58.753 bs=4096 00:15:58.753 iodepth=1 00:15:58.753 norandommap=0 00:15:58.753 numjobs=1 00:15:58.753 00:15:58.753 verify_dump=1 00:15:58.753 verify_backlog=512 00:15:58.753 verify_state_save=0 00:15:58.753 do_verify=1 00:15:58.753 verify=crc32c-intel 00:15:58.753 [job0] 00:15:58.753 filename=/dev/nvme0n1 00:15:58.753 [job1] 00:15:58.753 filename=/dev/nvme0n2 00:15:58.753 [job2] 00:15:58.753 filename=/dev/nvme0n3 00:15:58.753 [job3] 00:15:58.753 filename=/dev/nvme0n4 00:15:58.753 Could not set queue depth (nvme0n1) 00:15:58.753 Could not set queue depth (nvme0n2) 
00:15:58.753 Could not set queue depth (nvme0n3) 00:15:58.753 Could not set queue depth (nvme0n4) 00:15:58.753 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.753 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.753 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.753 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.753 fio-3.35 00:15:58.753 Starting 4 threads 00:16:00.130 00:16:00.130 job0: (groupid=0, jobs=1): err= 0: pid=86409: Mon Jul 15 02:17:59 2024 00:16:00.130 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:00.130 slat (nsec): min=12977, max=37444, avg=14937.31, stdev=1920.60 00:16:00.130 clat (usec): min=123, max=206, avg=149.05, stdev=10.21 00:16:00.130 lat (usec): min=136, max=221, avg=163.99, stdev=10.40 00:16:00.130 clat percentiles (usec): 00:16:00.130 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:16:00.130 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:16:00.130 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 167], 00:16:00.130 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 206], 00:16:00.130 | 99.99th=[ 206] 00:16:00.130 write: IOPS=3407, BW=13.3MiB/s (14.0MB/s)(13.3MiB/1001msec); 0 zone resets 00:16:00.130 slat (usec): min=18, max=111, avg=21.85, stdev= 4.04 00:16:00.130 clat (usec): min=91, max=645, avg=120.18, stdev=16.79 00:16:00.130 lat (usec): min=111, max=675, avg=142.03, stdev=17.66 00:16:00.130 clat percentiles (usec): 00:16:00.130 | 1.00th=[ 98], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 112], 00:16:00.130 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:16:00.130 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 139], 00:16:00.130 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 562], 00:16:00.130 | 99.99th=[ 644] 00:16:00.130 bw ( KiB/s): min=13432, max=13432, per=27.05%, avg=13432.00, stdev= 0.00, samples=1 00:16:00.130 iops : min= 3358, max= 3358, avg=3358.00, stdev= 0.00, samples=1 00:16:00.130 lat (usec) : 100=0.97%, 250=98.98%, 500=0.02%, 750=0.03% 00:16:00.130 cpu : usr=3.10%, sys=8.30%, ctx=6483, majf=0, minf=4 00:16:00.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.130 issued rwts: total=3072,3411,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.130 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.130 job1: (groupid=0, jobs=1): err= 0: pid=86410: Mon Jul 15 02:17:59 2024 00:16:00.130 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:00.130 slat (nsec): min=13809, max=45330, avg=16258.98, stdev=2396.64 00:16:00.130 clat (usec): min=121, max=1214, avg=150.38, stdev=24.08 00:16:00.130 lat (usec): min=140, max=1228, avg=166.64, stdev=24.22 00:16:00.130 clat percentiles (usec): 00:16:00.130 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:16:00.130 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:16:00.130 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 167], 00:16:00.130 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 359], 99.95th=[ 529], 00:16:00.130 | 99.99th=[ 1221] 00:16:00.130 write: IOPS=3254, 
BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:16:00.130 slat (nsec): min=17155, max=57404, avg=23635.76, stdev=3736.19 00:16:00.130 clat (usec): min=95, max=1949, avg=122.34, stdev=34.78 00:16:00.130 lat (usec): min=117, max=1976, avg=145.98, stdev=35.23 00:16:00.130 clat percentiles (usec): 00:16:00.130 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 114], 00:16:00.130 | 30.00th=[ 117], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 124], 00:16:00.130 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 141], 00:16:00.130 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 302], 99.95th=[ 343], 00:16:00.130 | 99.99th=[ 1958] 00:16:00.130 bw ( KiB/s): min=12672, max=12672, per=25.52%, avg=12672.00, stdev= 0.00, samples=1 00:16:00.130 iops : min= 3168, max= 3168, avg=3168.00, stdev= 0.00, samples=1 00:16:00.130 lat (usec) : 100=0.70%, 250=99.07%, 500=0.19%, 750=0.02% 00:16:00.130 lat (msec) : 2=0.03% 00:16:00.130 cpu : usr=1.80%, sys=10.20%, ctx=6330, majf=0, minf=17 00:16:00.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.130 issued rwts: total=3072,3258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.130 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.130 job2: (groupid=0, jobs=1): err= 0: pid=86411: Mon Jul 15 02:17:59 2024 00:16:00.130 read: IOPS=2768, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:16:00.130 slat (nsec): min=13992, max=54436, avg=17280.90, stdev=5329.09 00:16:00.130 clat (usec): min=138, max=670, avg=164.37, stdev=17.30 00:16:00.130 lat (usec): min=152, max=686, avg=181.65, stdev=18.61 00:16:00.130 clat percentiles (usec): 00:16:00.130 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:16:00.130 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:16:00.130 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:16:00.130 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 233], 99.95th=[ 594], 00:16:00.130 | 99.99th=[ 668] 00:16:00.130 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:00.130 slat (nsec): min=19294, max=77251, avg=25812.86, stdev=7947.58 00:16:00.130 clat (usec): min=102, max=523, avg=132.10, stdev=14.25 00:16:00.130 lat (usec): min=123, max=549, avg=157.92, stdev=17.56 00:16:00.130 clat percentiles (usec): 00:16:00.130 | 1.00th=[ 109], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 123], 00:16:00.130 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:16:00.130 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:16:00.130 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 204], 99.95th=[ 310], 00:16:00.130 | 99.99th=[ 523] 00:16:00.130 bw ( KiB/s): min=12288, max=12288, per=24.75%, avg=12288.00, stdev= 0.00, samples=1 00:16:00.130 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:00.130 lat (usec) : 250=99.91%, 500=0.03%, 750=0.05% 00:16:00.130 cpu : usr=2.40%, sys=9.50%, ctx=5844, majf=0, minf=5 00:16:00.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.130 issued rwts: total=2771,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.130 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.130 
job3: (groupid=0, jobs=1): err= 0: pid=86412: Mon Jul 15 02:17:59 2024 00:16:00.130 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:00.130 slat (nsec): min=12973, max=46653, avg=15035.52, stdev=2397.63 00:16:00.130 clat (usec): min=164, max=514, avg=193.86, stdev=13.77 00:16:00.130 lat (usec): min=179, max=531, avg=208.90, stdev=13.94 00:16:00.130 clat percentiles (usec): 00:16:00.130 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:16:00.130 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:16:00.130 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:16:00.130 | 99.00th=[ 229], 99.50th=[ 233], 99.90th=[ 247], 99.95th=[ 251], 00:16:00.130 | 99.99th=[ 515] 00:16:00.130 write: IOPS=2683, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:16:00.130 slat (nsec): min=18535, max=84117, avg=21825.67, stdev=3815.43 00:16:00.130 clat (usec): min=119, max=223, avg=147.77, stdev=11.53 00:16:00.130 lat (usec): min=138, max=307, avg=169.59, stdev=12.68 00:16:00.130 clat percentiles (usec): 00:16:00.130 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:16:00.130 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:16:00.130 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 169], 00:16:00.130 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 200], 99.95th=[ 210], 00:16:00.130 | 99.99th=[ 225] 00:16:00.130 bw ( KiB/s): min=12288, max=12288, per=24.75%, avg=12288.00, stdev= 0.00, samples=1 00:16:00.130 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:00.130 lat (usec) : 250=99.96%, 500=0.02%, 750=0.02% 00:16:00.130 cpu : usr=2.00%, sys=7.10%, ctx=5246, majf=0, minf=9 00:16:00.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.130 issued rwts: total=2560,2686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.130 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.130 00:16:00.130 Run status group 0 (all jobs): 00:16:00.130 READ: bw=44.8MiB/s (47.0MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.8MiB (47.0MB), run=1001-1001msec 00:16:00.130 WRITE: bw=48.5MiB/s (50.8MB/s), 10.5MiB/s-13.3MiB/s (11.0MB/s-14.0MB/s), io=48.5MiB (50.9MB), run=1001-1001msec 00:16:00.130 00:16:00.130 Disk stats (read/write): 00:16:00.130 nvme0n1: ios=2610/3043, merge=0/0, ticks=413/387, in_queue=800, util=87.88% 00:16:00.130 nvme0n2: ios=2601/2910, merge=0/0, ticks=414/389, in_queue=803, util=88.25% 00:16:00.130 nvme0n3: ios=2445/2560, merge=0/0, ticks=410/362, in_queue=772, util=89.15% 00:16:00.130 nvme0n4: ios=2048/2511, merge=0/0, ticks=402/397, in_queue=799, util=89.71% 00:16:00.130 02:17:59 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:00.130 [global] 00:16:00.130 thread=1 00:16:00.130 invalidate=1 00:16:00.130 rw=randwrite 00:16:00.130 time_based=1 00:16:00.130 runtime=1 00:16:00.130 ioengine=libaio 00:16:00.130 direct=1 00:16:00.130 bs=4096 00:16:00.130 iodepth=1 00:16:00.130 norandommap=0 00:16:00.130 numjobs=1 00:16:00.130 00:16:00.130 verify_dump=1 00:16:00.130 verify_backlog=512 00:16:00.130 verify_state_save=0 00:16:00.130 do_verify=1 00:16:00.130 verify=crc32c-intel 00:16:00.130 [job0] 00:16:00.130 filename=/dev/nvme0n1 00:16:00.130 [job1] 00:16:00.130 filename=/dev/nvme0n2 00:16:00.130 
[job2] 00:16:00.130 filename=/dev/nvme0n3 00:16:00.130 [job3] 00:16:00.130 filename=/dev/nvme0n4 00:16:00.130 Could not set queue depth (nvme0n1) 00:16:00.130 Could not set queue depth (nvme0n2) 00:16:00.130 Could not set queue depth (nvme0n3) 00:16:00.130 Could not set queue depth (nvme0n4) 00:16:00.130 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.130 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.130 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.130 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.130 fio-3.35 00:16:00.130 Starting 4 threads 00:16:01.507 00:16:01.507 job0: (groupid=0, jobs=1): err= 0: pid=86471: Mon Jul 15 02:18:00 2024 00:16:01.507 read: IOPS=2002, BW=8012KiB/s (8204kB/s)(8020KiB/1001msec) 00:16:01.507 slat (usec): min=13, max=118, avg=18.63, stdev= 6.46 00:16:01.507 clat (usec): min=135, max=901, avg=251.23, stdev=32.04 00:16:01.507 lat (usec): min=151, max=916, avg=269.86, stdev=31.91 00:16:01.507 clat percentiles (usec): 00:16:01.507 | 1.00th=[ 157], 5.00th=[ 219], 10.00th=[ 231], 20.00th=[ 239], 00:16:01.507 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:16:01.507 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 289], 00:16:01.507 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[ 453], 99.95th=[ 652], 00:16:01.507 | 99.99th=[ 906] 00:16:01.507 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:01.507 slat (usec): min=19, max=123, avg=29.12, stdev=10.80 00:16:01.507 clat (usec): min=99, max=348, avg=190.32, stdev=21.86 00:16:01.507 lat (usec): min=119, max=413, avg=219.44, stdev=22.58 00:16:01.507 clat percentiles (usec): 00:16:01.507 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 180], 00:16:01.507 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:16:01.507 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 221], 00:16:01.507 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 338], 99.95th=[ 338], 00:16:01.507 | 99.99th=[ 351] 00:16:01.507 bw ( KiB/s): min= 8192, max= 8192, per=26.66%, avg=8192.00, stdev= 0.00, samples=1 00:16:01.507 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:01.507 lat (usec) : 100=0.05%, 250=75.70%, 500=24.20%, 750=0.02%, 1000=0.02% 00:16:01.507 cpu : usr=1.80%, sys=7.30%, ctx=4057, majf=0, minf=17 00:16:01.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.507 issued rwts: total=2005,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.507 job1: (groupid=0, jobs=1): err= 0: pid=86472: Mon Jul 15 02:18:00 2024 00:16:01.507 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:01.507 slat (nsec): min=11107, max=43409, avg=13691.91, stdev=3094.46 00:16:01.507 clat (usec): min=184, max=41188, avg=328.46, stdev=1043.83 00:16:01.507 lat (usec): min=197, max=41201, avg=342.15, stdev=1043.80 00:16:01.507 clat percentiles (usec): 00:16:01.507 | 1.00th=[ 210], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 289], 00:16:01.507 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 
00:16:01.507 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:16:01.507 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 1336], 99.95th=[41157], 00:16:01.507 | 99.99th=[41157] 00:16:01.507 write: IOPS=1795, BW=7181KiB/s (7353kB/s)(7188KiB/1001msec); 0 zone resets 00:16:01.507 slat (usec): min=11, max=108, avg=21.73, stdev= 4.83 00:16:01.507 clat (usec): min=132, max=832, avg=238.89, stdev=21.58 00:16:01.507 lat (usec): min=153, max=852, avg=260.61, stdev=21.70 00:16:01.507 clat percentiles (usec): 00:16:01.507 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:16:01.507 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 237], 60.00th=[ 241], 00:16:01.507 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:16:01.507 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 478], 99.95th=[ 832], 00:16:01.507 | 99.99th=[ 832] 00:16:01.507 bw ( KiB/s): min= 8192, max= 8192, per=26.66%, avg=8192.00, stdev= 0.00, samples=1 00:16:01.507 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:01.507 lat (usec) : 250=43.95%, 500=55.96%, 1000=0.03% 00:16:01.507 lat (msec) : 2=0.03%, 50=0.03% 00:16:01.507 cpu : usr=1.20%, sys=4.70%, ctx=3333, majf=0, minf=12 00:16:01.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.507 issued rwts: total=1536,1797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.507 job2: (groupid=0, jobs=1): err= 0: pid=86473: Mon Jul 15 02:18:00 2024 00:16:01.507 read: IOPS=1974, BW=7896KiB/s (8086kB/s)(7904KiB/1001msec) 00:16:01.507 slat (nsec): min=13115, max=45104, avg=16031.62, stdev=2666.55 00:16:01.507 clat (usec): min=145, max=1273, avg=258.07, stdev=49.60 00:16:01.507 lat (usec): min=163, max=1295, avg=274.10, stdev=49.94 00:16:01.507 clat percentiles (usec): 00:16:01.507 | 1.00th=[ 217], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 243], 00:16:01.507 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:16:01.507 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:16:01.507 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 1254], 99.95th=[ 1270], 00:16:01.507 | 99.99th=[ 1270] 00:16:01.507 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:01.507 slat (usec): min=18, max=150, avg=25.02, stdev= 7.70 00:16:01.507 clat (usec): min=106, max=360, avg=194.89, stdev=22.82 00:16:01.507 lat (usec): min=136, max=407, avg=219.91, stdev=22.52 00:16:01.507 clat percentiles (usec): 00:16:01.507 | 1.00th=[ 153], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 182], 00:16:01.507 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:16:01.507 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 237], 00:16:01.507 | 99.00th=[ 285], 99.50th=[ 318], 99.90th=[ 330], 99.95th=[ 347], 00:16:01.507 | 99.99th=[ 363] 00:16:01.507 bw ( KiB/s): min= 8192, max= 8192, per=26.66%, avg=8192.00, stdev= 0.00, samples=1 00:16:01.507 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:01.507 lat (usec) : 250=71.17%, 500=28.65%, 750=0.05%, 1000=0.05% 00:16:01.507 lat (msec) : 2=0.07% 00:16:01.507 cpu : usr=1.20%, sys=6.60%, ctx=4032, majf=0, minf=5 00:16:01.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:16:01.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.507 issued rwts: total=1976,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.507 job3: (groupid=0, jobs=1): err= 0: pid=86474: Mon Jul 15 02:18:00 2024 00:16:01.507 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:01.507 slat (nsec): min=8920, max=34434, avg=13457.59, stdev=2440.42 00:16:01.507 clat (usec): min=201, max=41172, avg=328.71, stdev=1043.39 00:16:01.507 lat (usec): min=213, max=41187, avg=342.17, stdev=1043.44 00:16:01.507 clat percentiles (usec): 00:16:01.507 | 1.00th=[ 247], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 289], 00:16:01.507 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:16:01.507 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:16:01.507 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 1385], 99.95th=[41157], 00:16:01.507 | 99.99th=[41157] 00:16:01.507 write: IOPS=1796, BW=7185KiB/s (7357kB/s)(7192KiB/1001msec); 0 zone resets 00:16:01.507 slat (nsec): min=16464, max=72038, avg=21812.73, stdev=4412.87 00:16:01.507 clat (usec): min=116, max=832, avg=238.71, stdev=23.27 00:16:01.507 lat (usec): min=146, max=853, avg=260.52, stdev=23.02 00:16:01.507 clat percentiles (usec): 00:16:01.507 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:16:01.507 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 237], 60.00th=[ 241], 00:16:01.507 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 265], 00:16:01.507 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 586], 99.95th=[ 832], 00:16:01.507 | 99.99th=[ 832] 00:16:01.507 bw ( KiB/s): min= 8192, max= 8192, per=26.66%, avg=8192.00, stdev= 0.00, samples=1 00:16:01.507 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:01.507 lat (usec) : 250=43.88%, 500=56.00%, 750=0.03%, 1000=0.03% 00:16:01.507 lat (msec) : 2=0.03%, 50=0.03% 00:16:01.507 cpu : usr=1.60%, sys=4.20%, ctx=3334, majf=0, minf=11 00:16:01.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.507 issued rwts: total=1536,1798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.507 00:16:01.507 Run status group 0 (all jobs): 00:16:01.507 READ: bw=27.5MiB/s (28.9MB/s), 6138KiB/s-8012KiB/s (6285kB/s-8204kB/s), io=27.6MiB (28.9MB), run=1001-1001msec 00:16:01.507 WRITE: bw=30.0MiB/s (31.5MB/s), 7181KiB/s-8184KiB/s (7353kB/s-8380kB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:16:01.507 00:16:01.507 Disk stats (read/write): 00:16:01.507 nvme0n1: ios=1586/2013, merge=0/0, ticks=416/406, in_queue=822, util=87.98% 00:16:01.507 nvme0n2: ios=1360/1536, merge=0/0, ticks=454/378, in_queue=832, util=88.17% 00:16:01.507 nvme0n3: ios=1542/2007, merge=0/0, ticks=414/412, in_queue=826, util=89.48% 00:16:01.507 nvme0n4: ios=1324/1536, merge=0/0, ticks=438/380, in_queue=818, util=89.74% 00:16:01.507 02:18:00 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:01.507 [global] 00:16:01.507 thread=1 00:16:01.507 invalidate=1 00:16:01.507 rw=write 00:16:01.508 time_based=1 00:16:01.508 runtime=1 00:16:01.508 ioengine=libaio 00:16:01.508 direct=1 00:16:01.508 bs=4096 00:16:01.508 iodepth=128 00:16:01.508 
norandommap=0 00:16:01.508 numjobs=1 00:16:01.508 00:16:01.508 verify_dump=1 00:16:01.508 verify_backlog=512 00:16:01.508 verify_state_save=0 00:16:01.508 do_verify=1 00:16:01.508 verify=crc32c-intel 00:16:01.508 [job0] 00:16:01.508 filename=/dev/nvme0n1 00:16:01.508 [job1] 00:16:01.508 filename=/dev/nvme0n2 00:16:01.508 [job2] 00:16:01.508 filename=/dev/nvme0n3 00:16:01.508 [job3] 00:16:01.508 filename=/dev/nvme0n4 00:16:01.508 Could not set queue depth (nvme0n1) 00:16:01.508 Could not set queue depth (nvme0n2) 00:16:01.508 Could not set queue depth (nvme0n3) 00:16:01.508 Could not set queue depth (nvme0n4) 00:16:01.508 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:01.508 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:01.508 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:01.508 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:01.508 fio-3.35 00:16:01.508 Starting 4 threads 00:16:02.886 00:16:02.886 job0: (groupid=0, jobs=1): err= 0: pid=86528: Mon Jul 15 02:18:02 2024 00:16:02.886 read: IOPS=5158, BW=20.2MiB/s (21.1MB/s)(20.2MiB/1001msec) 00:16:02.886 slat (usec): min=5, max=3130, avg=88.26, stdev=374.26 00:16:02.886 clat (usec): min=589, max=16253, avg=11503.39, stdev=1519.81 00:16:02.886 lat (usec): min=2577, max=16265, avg=11591.64, stdev=1492.51 00:16:02.886 clat percentiles (usec): 00:16:02.886 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10683], 00:16:02.886 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:16:02.886 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13435], 95.00th=[13829], 00:16:02.886 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15401], 99.95th=[16188], 00:16:02.886 | 99.99th=[16319] 00:16:02.886 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:16:02.886 slat (usec): min=11, max=4384, avg=88.89, stdev=311.16 00:16:02.886 clat (usec): min=5279, max=17928, avg=11886.61, stdev=1743.09 00:16:02.886 lat (usec): min=5299, max=17972, avg=11975.50, stdev=1746.27 00:16:02.886 clat percentiles (usec): 00:16:02.886 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10683], 00:16:02.886 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11994], 00:16:02.886 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14222], 95.00th=[15008], 00:16:02.886 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:16:02.886 | 99.99th=[17957] 00:16:02.886 bw ( KiB/s): min=20480, max=20480, per=30.69%, avg=20480.00, stdev= 0.00, samples=1 00:16:02.886 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:16:02.886 lat (usec) : 750=0.01% 00:16:02.886 lat (msec) : 4=0.30%, 10=11.73%, 20=87.97% 00:16:02.886 cpu : usr=5.20%, sys=15.30%, ctx=927, majf=0, minf=11 00:16:02.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:02.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.886 issued rwts: total=5164,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.886 job1: (groupid=0, jobs=1): err= 0: pid=86529: Mon Jul 15 02:18:02 2024 00:16:02.886 read: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1004msec) 00:16:02.886 slat 
(usec): min=4, max=8994, avg=192.84, stdev=877.52 00:16:02.886 clat (usec): min=881, max=34156, avg=23558.27, stdev=4818.96 00:16:02.886 lat (usec): min=5610, max=34194, avg=23751.11, stdev=4805.05 00:16:02.886 clat percentiles (usec): 00:16:02.887 | 1.00th=[ 6587], 5.00th=[15139], 10.00th=[18482], 20.00th=[20055], 00:16:02.887 | 30.00th=[20579], 40.00th=[21890], 50.00th=[22676], 60.00th=[24511], 00:16:02.887 | 70.00th=[27657], 80.00th=[28443], 90.00th=[29492], 95.00th=[30802], 00:16:02.887 | 99.00th=[31589], 99.50th=[31851], 99.90th=[32900], 99.95th=[33424], 00:16:02.887 | 99.99th=[34341] 00:16:02.887 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:16:02.887 slat (usec): min=5, max=5646, avg=146.35, stdev=584.96 00:16:02.887 clat (usec): min=11167, max=31027, avg=20220.95, stdev=3160.67 00:16:02.887 lat (usec): min=11191, max=31063, avg=20367.30, stdev=3167.58 00:16:02.887 clat percentiles (usec): 00:16:02.887 | 1.00th=[12649], 5.00th=[15795], 10.00th=[16450], 20.00th=[17695], 00:16:02.887 | 30.00th=[18744], 40.00th=[19268], 50.00th=[19792], 60.00th=[20579], 00:16:02.887 | 70.00th=[21627], 80.00th=[22938], 90.00th=[24249], 95.00th=[25560], 00:16:02.887 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30278], 99.95th=[30278], 00:16:02.887 | 99.99th=[31065] 00:16:02.887 bw ( KiB/s): min=12288, max=12288, per=18.41%, avg=12288.00, stdev= 0.00, samples=2 00:16:02.887 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:02.887 lat (usec) : 1000=0.02% 00:16:02.887 lat (msec) : 10=0.55%, 20=37.64%, 50=61.79% 00:16:02.887 cpu : usr=2.99%, sys=8.47%, ctx=736, majf=0, minf=7 00:16:02.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:02.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.887 issued rwts: total=2741,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.887 job2: (groupid=0, jobs=1): err= 0: pid=86530: Mon Jul 15 02:18:02 2024 00:16:02.887 read: IOPS=4631, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1002msec) 00:16:02.887 slat (usec): min=9, max=3144, avg=98.15, stdev=429.56 00:16:02.887 clat (usec): min=325, max=16147, avg=12902.52, stdev=1285.82 00:16:02.887 lat (usec): min=3470, max=16160, avg=13000.67, stdev=1235.70 00:16:02.887 clat percentiles (usec): 00:16:02.887 | 1.00th=[10159], 5.00th=[10945], 10.00th=[11338], 20.00th=[11994], 00:16:02.887 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:16:02.887 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14091], 95.00th=[14484], 00:16:02.887 | 99.00th=[15008], 99.50th=[15401], 99.90th=[16057], 99.95th=[16188], 00:16:02.887 | 99.99th=[16188] 00:16:02.887 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:16:02.887 slat (usec): min=9, max=4897, avg=99.03, stdev=380.90 00:16:02.887 clat (usec): min=4087, max=16162, avg=13029.87, stdev=1318.25 00:16:02.887 lat (usec): min=4105, max=16194, avg=13128.91, stdev=1300.54 00:16:02.887 clat percentiles (usec): 00:16:02.887 | 1.00th=[10290], 5.00th=[10945], 10.00th=[11207], 20.00th=[11600], 00:16:02.887 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:16:02.887 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14353], 95.00th=[14877], 00:16:02.887 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16188], 99.95th=[16188], 00:16:02.887 | 99.99th=[16188] 00:16:02.887 bw ( KiB/s): min=19720, 
max=20480, per=30.12%, avg=20100.00, stdev=537.40, samples=2 00:16:02.887 iops : min= 4930, max= 5120, avg=5025.00, stdev=134.35, samples=2 00:16:02.887 lat (usec) : 500=0.01% 00:16:02.887 lat (msec) : 4=0.27%, 10=0.52%, 20=99.20% 00:16:02.887 cpu : usr=5.09%, sys=13.17%, ctx=772, majf=0, minf=3 00:16:02.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:02.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.887 issued rwts: total=4641,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.887 job3: (groupid=0, jobs=1): err= 0: pid=86531: Mon Jul 15 02:18:02 2024 00:16:02.887 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:16:02.887 slat (usec): min=3, max=9213, avg=191.37, stdev=810.90 00:16:02.887 clat (usec): min=17428, max=32597, avg=25116.32, stdev=2973.02 00:16:02.887 lat (usec): min=18135, max=32615, avg=25307.69, stdev=2895.47 00:16:02.887 clat percentiles (usec): 00:16:02.887 | 1.00th=[18482], 5.00th=[20579], 10.00th=[21365], 20.00th=[22414], 00:16:02.887 | 30.00th=[23200], 40.00th=[23987], 50.00th=[25035], 60.00th=[26084], 00:16:02.887 | 70.00th=[27132], 80.00th=[27919], 90.00th=[29230], 95.00th=[29754], 00:16:02.887 | 99.00th=[30540], 99.50th=[31065], 99.90th=[32637], 99.95th=[32637], 00:16:02.887 | 99.99th=[32637] 00:16:02.887 write: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1003msec); 0 zone resets 00:16:02.887 slat (usec): min=4, max=5930, avg=166.54, stdev=651.00 00:16:02.887 clat (usec): min=2010, max=29024, avg=21139.89, stdev=3349.55 00:16:02.887 lat (usec): min=4465, max=29317, avg=21306.43, stdev=3326.32 00:16:02.887 clat percentiles (usec): 00:16:02.887 | 1.00th=[ 7177], 5.00th=[16909], 10.00th=[17957], 20.00th=[19006], 00:16:02.887 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20841], 60.00th=[21890], 00:16:02.887 | 70.00th=[22938], 80.00th=[23462], 90.00th=[25560], 95.00th=[26084], 00:16:02.887 | 99.00th=[26870], 99.50th=[27919], 99.90th=[28967], 99.95th=[28967], 00:16:02.887 | 99.99th=[28967] 00:16:02.887 bw ( KiB/s): min=10096, max=12288, per=16.77%, avg=11192.00, stdev=1549.98, samples=2 00:16:02.887 iops : min= 2524, max= 3072, avg=2798.00, stdev=387.49, samples=2 00:16:02.887 lat (msec) : 4=0.02%, 10=0.73%, 20=19.80%, 50=79.45% 00:16:02.887 cpu : usr=2.99%, sys=8.28%, ctx=715, majf=0, minf=10 00:16:02.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:02.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.887 issued rwts: total=2560,2925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.887 00:16:02.887 Run status group 0 (all jobs): 00:16:02.887 READ: bw=58.8MiB/s (61.6MB/s), 9.97MiB/s-20.2MiB/s (10.5MB/s-21.1MB/s), io=59.0MiB (61.9MB), run=1001-1004msec 00:16:02.887 WRITE: bw=65.2MiB/s (68.3MB/s), 11.4MiB/s-22.0MiB/s (11.9MB/s-23.0MB/s), io=65.4MiB (68.6MB), run=1001-1004msec 00:16:02.887 00:16:02.887 Disk stats (read/write): 00:16:02.887 nvme0n1: ios=4658/4670, merge=0/0, ticks=12813/12751, in_queue=25564, util=88.98% 00:16:02.887 nvme0n2: ios=2517/2560, merge=0/0, ticks=16460/13495, in_queue=29955, util=89.10% 00:16:02.887 nvme0n3: ios=4113/4407, merge=0/0, ticks=12527/12617, in_queue=25144, util=89.78% 00:16:02.887 
nvme0n4: ios=2210/2560, merge=0/0, ticks=13239/12706, in_queue=25945, util=89.83% 00:16:02.887 02:18:02 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:02.887 [global] 00:16:02.887 thread=1 00:16:02.887 invalidate=1 00:16:02.887 rw=randwrite 00:16:02.887 time_based=1 00:16:02.887 runtime=1 00:16:02.887 ioengine=libaio 00:16:02.887 direct=1 00:16:02.887 bs=4096 00:16:02.887 iodepth=128 00:16:02.887 norandommap=0 00:16:02.887 numjobs=1 00:16:02.887 00:16:02.887 verify_dump=1 00:16:02.887 verify_backlog=512 00:16:02.887 verify_state_save=0 00:16:02.887 do_verify=1 00:16:02.887 verify=crc32c-intel 00:16:02.887 [job0] 00:16:02.887 filename=/dev/nvme0n1 00:16:02.887 [job1] 00:16:02.887 filename=/dev/nvme0n2 00:16:02.887 [job2] 00:16:02.887 filename=/dev/nvme0n3 00:16:02.887 [job3] 00:16:02.887 filename=/dev/nvme0n4 00:16:02.887 Could not set queue depth (nvme0n1) 00:16:02.887 Could not set queue depth (nvme0n2) 00:16:02.887 Could not set queue depth (nvme0n3) 00:16:02.887 Could not set queue depth (nvme0n4) 00:16:02.887 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.887 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.887 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.887 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.887 fio-3.35 00:16:02.887 Starting 4 threads 00:16:04.272 00:16:04.272 job0: (groupid=0, jobs=1): err= 0: pid=86584: Mon Jul 15 02:18:03 2024 00:16:04.272 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:16:04.272 slat (usec): min=5, max=11161, avg=90.31, stdev=598.77 00:16:04.272 clat (usec): min=5334, max=24413, avg=12336.66, stdev=2430.57 00:16:04.272 lat (usec): min=5360, max=24431, avg=12426.97, stdev=2472.25 00:16:04.272 clat percentiles (usec): 00:16:04.272 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10683], 00:16:04.272 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:16:04.272 | 70.00th=[13173], 80.00th=[14091], 90.00th=[14877], 95.00th=[16450], 00:16:04.272 | 99.00th=[21627], 99.50th=[23200], 99.90th=[23987], 99.95th=[24511], 00:16:04.272 | 99.99th=[24511] 00:16:04.272 write: IOPS=5446, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1007msec); 0 zone resets 00:16:04.272 slat (usec): min=4, max=10178, avg=91.36, stdev=686.90 00:16:04.272 clat (usec): min=1136, max=24281, avg=11721.97, stdev=1951.33 00:16:04.272 lat (usec): min=4238, max=24291, avg=11813.33, stdev=2056.04 00:16:04.272 clat percentiles (usec): 00:16:04.272 | 1.00th=[ 5145], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[10421], 00:16:04.272 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:16:04.272 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13698], 95.00th=[13960], 00:16:04.272 | 99.00th=[14746], 99.50th=[19006], 99.90th=[22152], 99.95th=[23987], 00:16:04.272 | 99.99th=[24249] 00:16:04.272 bw ( KiB/s): min=20688, max=22212, per=27.02%, avg=21450.00, stdev=1077.63, samples=2 00:16:04.272 iops : min= 5172, max= 5553, avg=5362.50, stdev=269.41, samples=2 00:16:04.272 lat (msec) : 2=0.01%, 10=13.08%, 20=85.81%, 50=1.10% 00:16:04.272 cpu : usr=4.17%, sys=14.41%, ctx=424, majf=0, minf=11 00:16:04.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:04.272 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.272 issued rwts: total=5120,5485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.272 job1: (groupid=0, jobs=1): err= 0: pid=86585: Mon Jul 15 02:18:03 2024 00:16:04.272 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:16:04.272 slat (usec): min=5, max=3822, avg=93.31, stdev=437.65 00:16:04.272 clat (usec): min=8760, max=15811, avg=11989.24, stdev=1180.24 00:16:04.272 lat (usec): min=8835, max=16707, avg=12082.55, stdev=1168.10 00:16:04.272 clat percentiles (usec): 00:16:04.272 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11207], 00:16:04.272 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:16:04.272 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13566], 00:16:04.272 | 99.00th=[14615], 99.50th=[15008], 99.90th=[15401], 99.95th=[15533], 00:16:04.272 | 99.99th=[15795] 00:16:04.272 write: IOPS=5262, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1002msec); 0 zone resets 00:16:04.272 slat (usec): min=8, max=3849, avg=90.98, stdev=372.17 00:16:04.272 clat (usec): min=1126, max=16108, avg=12363.10, stdev=1569.06 00:16:04.272 lat (usec): min=1188, max=16190, avg=12454.08, stdev=1549.92 00:16:04.272 clat percentiles (usec): 00:16:04.272 | 1.00th=[ 5538], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[11600], 00:16:04.272 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042], 00:16:04.272 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13960], 00:16:04.272 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16057], 99.95th=[16057], 00:16:04.272 | 99.99th=[16057] 00:16:04.272 bw ( KiB/s): min=20616, max=20616, per=25.97%, avg=20616.00, stdev= 0.00, samples=1 00:16:04.272 iops : min= 5154, max= 5154, avg=5154.00, stdev= 0.00, samples=1 00:16:04.272 lat (msec) : 2=0.13%, 10=8.93%, 20=90.95% 00:16:04.272 cpu : usr=4.30%, sys=15.48%, ctx=720, majf=0, minf=9 00:16:04.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:04.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.272 issued rwts: total=5120,5273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.272 job2: (groupid=0, jobs=1): err= 0: pid=86586: Mon Jul 15 02:18:03 2024 00:16:04.272 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:16:04.272 slat (usec): min=8, max=6202, avg=101.83, stdev=557.57 00:16:04.272 clat (usec): min=5017, max=21355, avg=13604.97, stdev=1476.50 00:16:04.272 lat (usec): min=5031, max=21369, avg=13706.80, stdev=1531.10 00:16:04.272 clat percentiles (usec): 00:16:04.272 | 1.00th=[ 8225], 5.00th=[11600], 10.00th=[12518], 20.00th=[13042], 00:16:04.272 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:16:04.272 | 70.00th=[13960], 80.00th=[14484], 90.00th=[15139], 95.00th=[15664], 00:16:04.272 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:16:04.272 | 99.99th=[21365] 00:16:04.272 write: IOPS=4607, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec); 0 zone resets 00:16:04.272 slat (usec): min=11, max=6241, avg=106.50, stdev=616.90 00:16:04.272 clat (usec): min=2495, max=20097, avg=13866.18, stdev=1661.99 00:16:04.272 lat (usec): min=2530, max=20149, avg=13972.67, stdev=1638.87 
00:16:04.272 clat percentiles (usec): 00:16:04.272 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[12780], 20.00th=[13304], 00:16:04.272 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:16:04.272 | 70.00th=[14615], 80.00th=[14746], 90.00th=[15008], 95.00th=[15139], 00:16:04.272 | 99.00th=[17171], 99.50th=[18744], 99.90th=[19530], 99.95th=[19792], 00:16:04.272 | 99.99th=[20055] 00:16:04.272 bw ( KiB/s): min=17344, max=19520, per=23.22%, avg=18432.00, stdev=1538.66, samples=2 00:16:04.272 iops : min= 4336, max= 4880, avg=4608.00, stdev=384.67, samples=2 00:16:04.272 lat (msec) : 4=0.13%, 10=4.48%, 20=95.37%, 50=0.02% 00:16:04.272 cpu : usr=3.59%, sys=14.07%, ctx=338, majf=0, minf=12 00:16:04.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:04.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.272 issued rwts: total=4608,4621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.272 job3: (groupid=0, jobs=1): err= 0: pid=86587: Mon Jul 15 02:18:03 2024 00:16:04.272 read: IOPS=4371, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1003msec) 00:16:04.272 slat (usec): min=6, max=3327, avg=105.48, stdev=467.36 00:16:04.272 clat (usec): min=430, max=17398, avg=13867.34, stdev=1459.03 00:16:04.272 lat (usec): min=3712, max=19013, avg=13972.82, stdev=1400.69 00:16:04.272 clat percentiles (usec): 00:16:04.272 | 1.00th=[ 7242], 5.00th=[11469], 10.00th=[11994], 20.00th=[13698], 00:16:04.272 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:16:04.273 | 70.00th=[14353], 80.00th=[14615], 90.00th=[14877], 95.00th=[15401], 00:16:04.273 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17433], 99.95th=[17433], 00:16:04.273 | 99.99th=[17433] 00:16:04.273 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:16:04.273 slat (usec): min=11, max=3478, avg=108.77, stdev=457.17 00:16:04.273 clat (usec): min=10848, max=17212, avg=14254.91, stdev=1313.26 00:16:04.273 lat (usec): min=10872, max=17240, avg=14363.68, stdev=1287.59 00:16:04.273 clat percentiles (usec): 00:16:04.273 | 1.00th=[11469], 5.00th=[11994], 10.00th=[12256], 20.00th=[12649], 00:16:04.273 | 30.00th=[13566], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:16:04.273 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15664], 95.00th=[15926], 00:16:04.273 | 99.00th=[16581], 99.50th=[16712], 99.90th=[17171], 99.95th=[17171], 00:16:04.273 | 99.99th=[17171] 00:16:04.273 bw ( KiB/s): min=17976, max=18888, per=23.22%, avg=18432.00, stdev=644.88, samples=2 00:16:04.273 iops : min= 4494, max= 4722, avg=4608.00, stdev=161.22, samples=2 00:16:04.273 lat (usec) : 500=0.01% 00:16:04.273 lat (msec) : 4=0.17%, 10=0.54%, 20=99.28% 00:16:04.273 cpu : usr=3.79%, sys=14.07%, ctx=666, majf=0, minf=19 00:16:04.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:04.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.273 issued rwts: total=4385,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.273 00:16:04.273 Run status group 0 (all jobs): 00:16:04.273 READ: bw=74.6MiB/s (78.2MB/s), 17.1MiB/s-20.0MiB/s (17.9MB/s-20.9MB/s), io=75.1MiB (78.8MB), run=1002-1007msec 00:16:04.273 WRITE: bw=77.5MiB/s 
(81.3MB/s), 17.9MiB/s-21.3MiB/s (18.8MB/s-22.3MB/s), io=78.1MiB (81.9MB), run=1002-1007msec 00:16:04.273 00:16:04.273 Disk stats (read/write): 00:16:04.273 nvme0n1: ios=4392/4608, merge=0/0, ticks=50278/50707, in_queue=100985, util=87.45% 00:16:04.273 nvme0n2: ios=4263/4608, merge=0/0, ticks=15852/17081, in_queue=32933, util=87.56% 00:16:04.273 nvme0n3: ios=3773/4096, merge=0/0, ticks=23935/24280, in_queue=48215, util=89.20% 00:16:04.273 nvme0n4: ios=3598/4096, merge=0/0, ticks=11929/12584, in_queue=24513, util=89.67% 00:16:04.273 02:18:03 -- target/fio.sh@55 -- # sync 00:16:04.273 02:18:03 -- target/fio.sh@59 -- # fio_pid=86606 00:16:04.273 02:18:03 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:04.273 02:18:03 -- target/fio.sh@61 -- # sleep 3 00:16:04.273 [global] 00:16:04.273 thread=1 00:16:04.273 invalidate=1 00:16:04.273 rw=read 00:16:04.273 time_based=1 00:16:04.273 runtime=10 00:16:04.273 ioengine=libaio 00:16:04.273 direct=1 00:16:04.273 bs=4096 00:16:04.273 iodepth=1 00:16:04.273 norandommap=1 00:16:04.273 numjobs=1 00:16:04.273 00:16:04.273 [job0] 00:16:04.273 filename=/dev/nvme0n1 00:16:04.273 [job1] 00:16:04.273 filename=/dev/nvme0n2 00:16:04.273 [job2] 00:16:04.273 filename=/dev/nvme0n3 00:16:04.273 [job3] 00:16:04.273 filename=/dev/nvme0n4 00:16:04.273 Could not set queue depth (nvme0n1) 00:16:04.273 Could not set queue depth (nvme0n2) 00:16:04.273 Could not set queue depth (nvme0n3) 00:16:04.273 Could not set queue depth (nvme0n4) 00:16:04.273 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.273 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.273 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.273 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:04.273 fio-3.35 00:16:04.273 Starting 4 threads 00:16:07.559 02:18:06 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:07.559 fio: pid=86653, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:07.559 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=69586944, buflen=4096 00:16:07.559 02:18:06 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:07.559 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=74731520, buflen=4096 00:16:07.559 fio: pid=86652, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:07.559 02:18:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:07.559 02:18:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:07.817 fio: pid=86648, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:07.817 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=53170176, buflen=4096 00:16:07.817 02:18:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:07.817 02:18:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:08.076 fio: pid=86651, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:08.076 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=59064320, buflen=4096 
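For reference, the hotplug failure reported above can be approximated with a standalone sketch along these lines (device path, bdev name, and fio parameters are taken from the surrounding log; the canonical logic lives in target/fio.sh and scripts/fio-wrapper, so treat this as an illustration, not the harness itself):

    #!/usr/bin/env bash
    # Start a 10-second read job against one exported namespace, then delete
    # the backing bdev mid-run. fio should abort with err=121 (Remote I/O
    # error), matching the four err=121 reports in the log above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
        --ioengine=libaio --direct=1 --time_based --runtime=10 --norandommap &
    fio_pid=$!
    sleep 3
    "$rpc" bdev_malloc_delete Malloc0      # remove the backing store under fio
    if wait "$fio_pid"; then
        echo "unexpected: fio survived bdev removal"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi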
00:16:08.076 00:16:08.076 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86648: Mon Jul 15 02:18:07 2024 00:16:08.076 read: IOPS=3817, BW=14.9MiB/s (15.6MB/s)(50.7MiB/3401msec) 00:16:08.076 slat (usec): min=10, max=12428, avg=16.26, stdev=165.27 00:16:08.076 clat (usec): min=118, max=2000, avg=244.17, stdev=30.60 00:16:08.076 lat (usec): min=131, max=12617, avg=260.44, stdev=168.08 00:16:08.076 clat percentiles (usec): 00:16:08.076 | 1.00th=[ 143], 5.00th=[ 212], 10.00th=[ 229], 20.00th=[ 237], 00:16:08.076 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:16:08.076 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:16:08.076 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 494], 99.95th=[ 693], 00:16:08.076 | 99.99th=[ 1336] 00:16:08.076 bw ( KiB/s): min=15144, max=15280, per=22.19%, avg=15230.67, stdev=52.44, samples=6 00:16:08.076 iops : min= 3786, max= 3820, avg=3807.67, stdev=13.11, samples=6 00:16:08.076 lat (usec) : 250=68.63%, 500=31.27%, 750=0.05%, 1000=0.02% 00:16:08.076 lat (msec) : 2=0.02%, 4=0.01% 00:16:08.076 cpu : usr=1.24%, sys=4.44%, ctx=13007, majf=0, minf=1 00:16:08.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.076 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.076 issued rwts: total=12982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.076 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86651: Mon Jul 15 02:18:07 2024 00:16:08.076 read: IOPS=3949, BW=15.4MiB/s (16.2MB/s)(56.3MiB/3651msec) 00:16:08.076 slat (usec): min=10, max=14214, avg=18.15, stdev=220.02 00:16:08.076 clat (usec): min=37, max=3357, avg=233.35, stdev=50.95 00:16:08.076 lat (usec): min=127, max=14500, avg=251.49, stdev=225.96 00:16:08.076 clat percentiles (usec): 00:16:08.076 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 153], 20.00th=[ 229], 00:16:08.076 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 245], 00:16:08.076 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 269], 00:16:08.076 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 478], 99.95th=[ 685], 00:16:08.076 | 99.99th=[ 1975] 00:16:08.076 bw ( KiB/s): min=14944, max=18590, per=22.84%, avg=15674.00, stdev=1291.33, samples=7 00:16:08.076 iops : min= 3736, max= 4647, avg=3918.43, stdev=322.64, samples=7 00:16:08.076 lat (usec) : 50=0.01%, 250=71.33%, 500=28.56%, 750=0.05%, 1000=0.02% 00:16:08.076 lat (msec) : 2=0.01%, 4=0.01% 00:16:08.076 cpu : usr=1.29%, sys=4.60%, ctx=14469, majf=0, minf=1 00:16:08.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.076 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.076 issued rwts: total=14421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.076 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86652: Mon Jul 15 02:18:07 2024 00:16:08.076 read: IOPS=5759, BW=22.5MiB/s (23.6MB/s)(71.3MiB/3168msec) 00:16:08.076 slat (usec): min=12, max=7834, avg=16.72, stdev=79.06 00:16:08.076 clat (usec): min=89, max=2590, avg=155.32, stdev=36.34 00:16:08.076 lat (usec): min=145, 
max=8008, avg=172.05, stdev=87.26 00:16:08.076 clat percentiles (usec): 00:16:08.076 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:16:08.076 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:16:08.076 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 178], 00:16:08.076 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 208], 99.95th=[ 265], 00:16:08.076 | 99.99th=[ 2573] 00:16:08.076 bw ( KiB/s): min=22368, max=23320, per=33.58%, avg=23043.00, stdev=380.33, samples=6 00:16:08.076 iops : min= 5592, max= 5830, avg=5760.67, stdev=95.14, samples=6 00:16:08.076 lat (usec) : 100=0.01%, 250=99.93%, 500=0.02%, 750=0.01%, 1000=0.01% 00:16:08.076 lat (msec) : 2=0.01%, 4=0.02% 00:16:08.076 cpu : usr=1.45%, sys=7.80%, ctx=18251, majf=0, minf=1 00:16:08.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.076 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.076 issued rwts: total=18246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.076 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86653: Mon Jul 15 02:18:07 2024 00:16:08.076 read: IOPS=5778, BW=22.6MiB/s (23.7MB/s)(66.4MiB/2940msec) 00:16:08.076 slat (nsec): min=13520, max=65403, avg=16049.39, stdev=2596.89 00:16:08.076 clat (usec): min=132, max=971, avg=155.44, stdev=17.12 00:16:08.076 lat (usec): min=146, max=989, avg=171.48, stdev=17.53 00:16:08.076 clat percentiles (usec): 00:16:08.076 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:16:08.076 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:16:08.076 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:16:08.076 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 375], 99.95th=[ 469], 00:16:08.076 | 99.99th=[ 766] 00:16:08.076 bw ( KiB/s): min=22784, max=23408, per=33.75%, avg=23158.40, stdev=248.28, samples=5 00:16:08.076 iops : min= 5696, max= 5852, avg=5789.60, stdev=62.07, samples=5 00:16:08.076 lat (usec) : 250=99.72%, 500=0.24%, 750=0.02%, 1000=0.01% 00:16:08.076 cpu : usr=2.14%, sys=7.18%, ctx=16990, majf=0, minf=1 00:16:08.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.076 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.076 issued rwts: total=16990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.076 00:16:08.076 Run status group 0 (all jobs): 00:16:08.076 READ: bw=67.0MiB/s (70.3MB/s), 14.9MiB/s-22.6MiB/s (15.6MB/s-23.7MB/s), io=245MiB (257MB), run=2940-3651msec 00:16:08.076 00:16:08.076 Disk stats (read/write): 00:16:08.076 nvme0n1: ios=12847/0, merge=0/0, ticks=3119/0, in_queue=3119, util=95.34% 00:16:08.076 nvme0n2: ios=14239/0, merge=0/0, ticks=3336/0, in_queue=3336, util=95.13% 00:16:08.076 nvme0n3: ios=17962/0, merge=0/0, ticks=2922/0, in_queue=2922, util=96.34% 00:16:08.076 nvme0n4: ios=16589/0, merge=0/0, ticks=2644/0, in_queue=2644, util=96.73% 00:16:08.076 02:18:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.076 02:18:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:08.335 02:18:07 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.335 02:18:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:08.592 02:18:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.592 02:18:08 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:08.849 02:18:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.849 02:18:08 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:09.106 02:18:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.106 02:18:08 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:09.364 02:18:08 -- target/fio.sh@69 -- # fio_status=0 00:16:09.364 02:18:08 -- target/fio.sh@70 -- # wait 86606 00:16:09.364 02:18:08 -- target/fio.sh@70 -- # fio_status=4 00:16:09.364 02:18:08 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:09.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.364 02:18:08 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:09.364 02:18:08 -- common/autotest_common.sh@1198 -- # local i=0 00:16:09.364 02:18:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:09.364 02:18:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:09.364 02:18:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:09.364 02:18:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:09.364 nvmf hotplug test: fio failed as expected 00:16:09.364 02:18:08 -- common/autotest_common.sh@1210 -- # return 0 00:16:09.364 02:18:08 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:09.364 02:18:08 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:09.364 02:18:08 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:09.623 02:18:09 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:09.623 02:18:09 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:09.623 02:18:09 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:09.623 02:18:09 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:09.623 02:18:09 -- target/fio.sh@91 -- # nvmftestfini 00:16:09.623 02:18:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:09.623 02:18:09 -- nvmf/common.sh@116 -- # sync 00:16:09.623 02:18:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:09.623 02:18:09 -- nvmf/common.sh@119 -- # set +e 00:16:09.623 02:18:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:09.623 02:18:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:09.623 rmmod nvme_tcp 00:16:09.623 rmmod nvme_fabrics 00:16:09.623 rmmod nvme_keyring 00:16:09.623 02:18:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:09.623 02:18:09 -- nvmf/common.sh@123 -- # set -e 00:16:09.623 02:18:09 -- nvmf/common.sh@124 -- # return 0 00:16:09.623 02:18:09 -- nvmf/common.sh@477 -- # '[' -n 86115 ']' 00:16:09.623 02:18:09 -- nvmf/common.sh@478 -- # killprocess 86115 00:16:09.623 02:18:09 -- common/autotest_common.sh@926 -- # '[' -z 86115 ']' 00:16:09.623 02:18:09 -- common/autotest_common.sh@930 -- # kill -0 86115 00:16:09.623 
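Condensed, the teardown traced through this stretch of the log amounts to the following (the NQN, state-file names, and pid 86115 are from this run; killprocess in common/autotest_common.sh adds retry and privilege handling that this sketch omits):

    # Disconnect the initiator, delete the subsystem, then unload the host
    # modules and stop the nvmf_tgt process.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem \
        nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state \
        ./local-job2-2-verify.state
    modprobe -v -r nvme-tcp      # also drops nvme_fabrics and nvme_keyring
    kill 86115; wait 86115 2>/dev/null || true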
02:18:09 -- common/autotest_common.sh@931 -- # uname 00:16:09.623 02:18:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.623 02:18:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86115 00:16:09.623 killing process with pid 86115 00:16:09.623 02:18:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:09.623 02:18:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:09.623 02:18:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86115' 00:16:09.623 02:18:09 -- common/autotest_common.sh@945 -- # kill 86115 00:16:09.623 02:18:09 -- common/autotest_common.sh@950 -- # wait 86115 00:16:09.881 02:18:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:09.881 02:18:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:09.881 02:18:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:09.881 02:18:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.881 02:18:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:09.881 02:18:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.881 02:18:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.881 02:18:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.881 02:18:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:09.881 00:16:09.881 real 0m19.130s 00:16:09.881 user 1m12.409s 00:16:09.881 sys 0m9.703s 00:16:09.881 02:18:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.881 02:18:09 -- common/autotest_common.sh@10 -- # set +x 00:16:09.881 ************************************ 00:16:09.881 END TEST nvmf_fio_target 00:16:09.881 ************************************ 00:16:10.139 02:18:09 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:10.139 02:18:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:10.139 02:18:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.139 02:18:09 -- common/autotest_common.sh@10 -- # set +x 00:16:10.139 ************************************ 00:16:10.139 START TEST nvmf_bdevio 00:16:10.139 ************************************ 00:16:10.139 02:18:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:10.139 * Looking for test storage... 
00:16:10.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:10.139 02:18:09 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.139 02:18:09 -- nvmf/common.sh@7 -- # uname -s 00:16:10.139 02:18:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.139 02:18:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.139 02:18:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.139 02:18:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.139 02:18:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.139 02:18:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.139 02:18:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.139 02:18:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.139 02:18:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.139 02:18:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.139 02:18:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:16:10.139 02:18:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:16:10.139 02:18:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.139 02:18:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.139 02:18:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.139 02:18:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.139 02:18:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.139 02:18:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.139 02:18:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.139 02:18:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.139 02:18:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.139 02:18:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.139 02:18:09 -- 
paths/export.sh@5 -- # export PATH 00:16:10.140 02:18:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.140 02:18:09 -- nvmf/common.sh@46 -- # : 0 00:16:10.140 02:18:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:10.140 02:18:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:10.140 02:18:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:10.140 02:18:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.140 02:18:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.140 02:18:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:10.140 02:18:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:10.140 02:18:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:10.140 02:18:09 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:10.140 02:18:09 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:10.140 02:18:09 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:10.140 02:18:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:10.140 02:18:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.140 02:18:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:10.140 02:18:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:10.140 02:18:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:10.140 02:18:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.140 02:18:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.140 02:18:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.140 02:18:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:10.140 02:18:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:10.140 02:18:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:10.140 02:18:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:10.140 02:18:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:10.140 02:18:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:10.140 02:18:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.140 02:18:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.140 02:18:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:10.140 02:18:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:10.140 02:18:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:10.140 02:18:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:10.140 02:18:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:10.140 02:18:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.140 02:18:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:10.140 02:18:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:10.140 02:18:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:10.140 02:18:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:10.140 02:18:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:10.140 
02:18:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:10.140 Cannot find device "nvmf_tgt_br" 00:16:10.140 02:18:09 -- nvmf/common.sh@154 -- # true 00:16:10.140 02:18:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:10.140 Cannot find device "nvmf_tgt_br2" 00:16:10.140 02:18:09 -- nvmf/common.sh@155 -- # true 00:16:10.140 02:18:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:10.140 02:18:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:10.140 Cannot find device "nvmf_tgt_br" 00:16:10.140 02:18:09 -- nvmf/common.sh@157 -- # true 00:16:10.140 02:18:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:10.140 Cannot find device "nvmf_tgt_br2" 00:16:10.140 02:18:09 -- nvmf/common.sh@158 -- # true 00:16:10.140 02:18:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:10.140 02:18:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:10.398 02:18:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:10.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.398 02:18:09 -- nvmf/common.sh@161 -- # true 00:16:10.398 02:18:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:10.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.398 02:18:09 -- nvmf/common.sh@162 -- # true 00:16:10.398 02:18:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:10.398 02:18:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:10.398 02:18:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:10.398 02:18:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:10.398 02:18:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:10.398 02:18:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:10.398 02:18:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:10.398 02:18:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:10.398 02:18:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:10.398 02:18:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:10.398 02:18:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:10.398 02:18:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:10.398 02:18:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:10.398 02:18:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:10.398 02:18:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:10.398 02:18:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:10.398 02:18:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:10.398 02:18:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:10.398 02:18:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:10.398 02:18:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:10.398 02:18:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:10.398 02:18:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:10.398 02:18:09 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:10.398 02:18:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:10.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:16:10.398 00:16:10.398 --- 10.0.0.2 ping statistics --- 00:16:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.398 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:10.398 02:18:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:10.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:10.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:10.398 00:16:10.398 --- 10.0.0.3 ping statistics --- 00:16:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.398 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:10.399 02:18:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:10.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:16:10.399 00:16:10.399 --- 10.0.0.1 ping statistics --- 00:16:10.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.399 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:10.399 02:18:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.399 02:18:09 -- nvmf/common.sh@421 -- # return 0 00:16:10.399 02:18:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:10.399 02:18:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.399 02:18:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:10.399 02:18:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:10.399 02:18:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.399 02:18:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:10.399 02:18:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:10.399 02:18:09 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:10.399 02:18:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:10.399 02:18:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:10.399 02:18:09 -- common/autotest_common.sh@10 -- # set +x 00:16:10.399 02:18:09 -- nvmf/common.sh@469 -- # nvmfpid=86971 00:16:10.399 02:18:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:10.399 02:18:09 -- nvmf/common.sh@470 -- # waitforlisten 86971 00:16:10.399 02:18:09 -- common/autotest_common.sh@819 -- # '[' -z 86971 ']' 00:16:10.399 02:18:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.399 02:18:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.399 02:18:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.399 02:18:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.399 02:18:09 -- common/autotest_common.sh@10 -- # set +x 00:16:10.657 [2024-07-15 02:18:09.959770] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
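The nvmf_veth_init sequence traced above builds a small bridged topology: the target listens inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, while the initiator stays in the root namespace as 10.0.0.1. Condensed to its working commands (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and is omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2      # initiator -> target, as verified in the log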
00:16:10.657 [2024-07-15 02:18:09.959874] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.657 [2024-07-15 02:18:10.093884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.657 [2024-07-15 02:18:10.159342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:10.657 [2024-07-15 02:18:10.159510] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.657 [2024-07-15 02:18:10.159523] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.657 [2024-07-15 02:18:10.159531] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.657 [2024-07-15 02:18:10.159699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:10.657 [2024-07-15 02:18:10.159860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:10.657 [2024-07-15 02:18:10.160334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:10.657 [2024-07-15 02:18:10.160338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.590 02:18:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.590 02:18:10 -- common/autotest_common.sh@852 -- # return 0 00:16:11.590 02:18:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:11.590 02:18:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:11.590 02:18:10 -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 02:18:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.590 02:18:10 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:11.590 02:18:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.590 02:18:10 -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 [2024-07-15 02:18:10.941219] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.590 02:18:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.590 02:18:10 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:11.590 02:18:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.590 02:18:10 -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 Malloc0 00:16:11.590 02:18:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.590 02:18:10 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:11.590 02:18:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.590 02:18:10 -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 02:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.590 02:18:11 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:11.590 02:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.590 02:18:11 -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 02:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.590 02:18:11 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.590 02:18:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.590 02:18:11 -- common/autotest_common.sh@10 -- # set +x 00:16:11.590 
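With networking up, the target is provisioned over RPC; the rpc_cmd calls traced above reduce to the sequence below. rpc.py talks to the default /var/tmp/spdk.sock, which stays reachable from the root namespace because Unix sockets are filesystem objects rather than netns-scoped:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420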
[2024-07-15 02:18:11.016786] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.590 02:18:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.590 02:18:11 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:11.590 02:18:11 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:11.590 02:18:11 -- nvmf/common.sh@520 -- # config=() 00:16:11.590 02:18:11 -- nvmf/common.sh@520 -- # local subsystem config 00:16:11.590 02:18:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:11.590 02:18:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:11.590 { 00:16:11.590 "params": { 00:16:11.590 "name": "Nvme$subsystem", 00:16:11.590 "trtype": "$TEST_TRANSPORT", 00:16:11.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:11.590 "adrfam": "ipv4", 00:16:11.590 "trsvcid": "$NVMF_PORT", 00:16:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:11.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:11.590 "hdgst": ${hdgst:-false}, 00:16:11.590 "ddgst": ${ddgst:-false} 00:16:11.590 }, 00:16:11.590 "method": "bdev_nvme_attach_controller" 00:16:11.590 } 00:16:11.590 EOF 00:16:11.590 )") 00:16:11.590 02:18:11 -- nvmf/common.sh@542 -- # cat 00:16:11.590 02:18:11 -- nvmf/common.sh@544 -- # jq . 00:16:11.590 02:18:11 -- nvmf/common.sh@545 -- # IFS=, 00:16:11.590 02:18:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:11.590 "params": { 00:16:11.590 "name": "Nvme1", 00:16:11.590 "trtype": "tcp", 00:16:11.590 "traddr": "10.0.0.2", 00:16:11.590 "adrfam": "ipv4", 00:16:11.590 "trsvcid": "4420", 00:16:11.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:11.590 "hdgst": false, 00:16:11.590 "ddgst": false 00:16:11.590 }, 00:16:11.590 "method": "bdev_nvme_attach_controller" 00:16:11.590 }' 00:16:11.590 [2024-07-15 02:18:11.068448] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:11.590 [2024-07-15 02:18:11.068547] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87025 ] 00:16:11.847 [2024-07-15 02:18:11.202936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.847 [2024-07-15 02:18:11.273010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.847 [2024-07-15 02:18:11.273184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.847 [2024-07-15 02:18:11.273194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.104 [2024-07-15 02:18:11.443072] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
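The bdevio process launched above receives its bdev configuration on an anonymous fd (/dev/fd/62, a process substitution over gen_nvmf_target_json); the params object printed in the log slots into the standard SPDK subsystem-config wrapper. A hand-rolled equivalent, assuming that wrapper shape, would be:

    # Feed bdevio the same attach-controller config via stdin instead of fd 62.
    # The adjacent rpc.c "spdk.sock in use" errors are expected here: bdevio
    # cannot additionally claim the RPC socket that nvmf_tgt already holds.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/stdin <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }]
      }]
    }
    EOF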
00:16:12.104 [2024-07-15 02:18:11.443142] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:12.104 I/O targets: 00:16:12.104 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:12.104 00:16:12.104 00:16:12.104 CUnit - A unit testing framework for C - Version 2.1-3 00:16:12.104 http://cunit.sourceforge.net/ 00:16:12.104 00:16:12.104 00:16:12.104 Suite: bdevio tests on: Nvme1n1 00:16:12.104 Test: blockdev write read block ...passed 00:16:12.104 Test: blockdev write zeroes read block ...passed 00:16:12.104 Test: blockdev write zeroes read no split ...passed 00:16:12.104 Test: blockdev write zeroes read split ...passed 00:16:12.104 Test: blockdev write zeroes read split partial ...passed 00:16:12.104 Test: blockdev reset ...[2024-07-15 02:18:11.555906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:12.104 [2024-07-15 02:18:11.556006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef2210 (9): Bad file descriptor 00:16:12.104 [2024-07-15 02:18:11.573146] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:12.104 passed 00:16:12.104 Test: blockdev write read 8 blocks ...passed 00:16:12.104 Test: blockdev write read size > 128k ...passed 00:16:12.104 Test: blockdev write read invalid size ...passed 00:16:12.104 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:12.104 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:12.104 Test: blockdev write read max offset ...passed 00:16:12.362 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:12.362 Test: blockdev writev readv 8 blocks ...passed 00:16:12.362 Test: blockdev writev readv 30 x 1block ...passed 00:16:12.362 Test: blockdev writev readv block ...passed 00:16:12.362 Test: blockdev writev readv size > 128k ...passed 00:16:12.362 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:12.362 Test: blockdev comparev and writev ...[2024-07-15 02:18:11.745365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.362 [2024-07-15 02:18:11.745421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:12.362 [2024-07-15 02:18:11.745441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.362 [2024-07-15 02:18:11.745453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:12.362 [2024-07-15 02:18:11.745733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.362 [2024-07-15 02:18:11.745751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:12.362 [2024-07-15 02:18:11.745768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.362 [2024-07-15 02:18:11.745778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:12.362 [2024-07-15 02:18:11.746041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.362 [2024-07-15 02:18:11.746058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:12.362 [2024-07-15 02:18:11.746074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.362 [2024-07-15 02:18:11.746085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:12.362 [2024-07-15 02:18:11.746342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.362 [2024-07-15 02:18:11.746358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:12.362 [2024-07-15 02:18:11.746375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:12.362 [2024-07-15 02:18:11.746385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:12.362 passed 00:16:12.362 Test: blockdev nvme passthru rw ...passed 00:16:12.362 Test: blockdev nvme passthru vendor specific ...[2024-07-15 02:18:11.828906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.362 [2024-07-15 02:18:11.828936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:12.362 [2024-07-15 02:18:11.829063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.362 [2024-07-15 02:18:11.829079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:12.362 [2024-07-15 02:18:11.829188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.362 [2024-07-15 02:18:11.829202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:12.362 passed 00:16:12.362 Test: blockdev nvme admin passthru ...[2024-07-15 02:18:11.829306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:12.362 [2024-07-15 02:18:11.829321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:12.362 passed 00:16:12.362 Test: blockdev copy ...passed 00:16:12.362 00:16:12.362 Run Summary: Type Total Ran Passed Failed Inactive 00:16:12.362 suites 1 1 n/a 0 0 00:16:12.362 tests 23 23 23 0 0 00:16:12.362 asserts 152 152 152 0 n/a 00:16:12.362 00:16:12.362 Elapsed time = 0.889 seconds 00:16:12.635 02:18:12 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.635 02:18:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.635 02:18:12 -- common/autotest_common.sh@10 -- # set +x 00:16:12.635 02:18:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.635 02:18:12 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:12.635 02:18:12 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:12.635 02:18:12 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:12.635 02:18:12 -- nvmf/common.sh@116 -- # sync 00:16:12.635 02:18:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:12.636 02:18:12 -- nvmf/common.sh@119 -- # set +e 00:16:12.636 02:18:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:12.636 02:18:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:12.636 rmmod nvme_tcp 00:16:12.636 rmmod nvme_fabrics 00:16:12.636 rmmod nvme_keyring 00:16:12.906 02:18:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:12.906 02:18:12 -- nvmf/common.sh@123 -- # set -e 00:16:12.906 02:18:12 -- nvmf/common.sh@124 -- # return 0 00:16:12.906 02:18:12 -- nvmf/common.sh@477 -- # '[' -n 86971 ']' 00:16:12.906 02:18:12 -- nvmf/common.sh@478 -- # killprocess 86971 00:16:12.906 02:18:12 -- common/autotest_common.sh@926 -- # '[' -z 86971 ']' 00:16:12.906 02:18:12 -- common/autotest_common.sh@930 -- # kill -0 86971 00:16:12.906 02:18:12 -- common/autotest_common.sh@931 -- # uname 00:16:12.906 02:18:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:12.906 02:18:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86971 00:16:12.906 02:18:12 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:12.906 02:18:12 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:12.906 killing process with pid 86971 00:16:12.906 02:18:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86971' 00:16:12.906 02:18:12 -- common/autotest_common.sh@945 -- # kill 86971 00:16:12.906 02:18:12 -- common/autotest_common.sh@950 -- # wait 86971 00:16:12.906 02:18:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:12.906 02:18:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:12.906 02:18:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:12.906 02:18:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.906 02:18:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:12.906 02:18:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.906 02:18:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.906 02:18:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.165 02:18:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:13.165 00:16:13.165 real 0m3.008s 00:16:13.165 user 0m10.949s 00:16:13.165 sys 0m0.756s 00:16:13.165 02:18:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.165 02:18:12 -- common/autotest_common.sh@10 -- # set +x 00:16:13.165 ************************************ 00:16:13.165 END TEST nvmf_bdevio 00:16:13.165 ************************************ 00:16:13.165 02:18:12 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:13.165 02:18:12 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:13.165 02:18:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:13.165 02:18:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:13.165 02:18:12 -- common/autotest_common.sh@10 -- # set +x 00:16:13.165 ************************************ 00:16:13.165 START TEST nvmf_bdevio_no_huge 00:16:13.165 ************************************ 00:16:13.165 02:18:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:13.165 * Looking for test storage... 
00:16:13.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:13.165 02:18:12 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.165 02:18:12 -- nvmf/common.sh@7 -- # uname -s 00:16:13.165 02:18:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.165 02:18:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.165 02:18:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.165 02:18:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.165 02:18:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.165 02:18:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.165 02:18:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.165 02:18:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.165 02:18:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.165 02:18:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.165 02:18:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:16:13.165 02:18:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:16:13.165 02:18:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.165 02:18:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.165 02:18:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.165 02:18:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.165 02:18:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.165 02:18:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.165 02:18:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.165 02:18:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.165 02:18:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.165 02:18:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.165 02:18:12 -- 
paths/export.sh@5 -- # export PATH 00:16:13.165 02:18:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.165 02:18:12 -- nvmf/common.sh@46 -- # : 0 00:16:13.165 02:18:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:13.165 02:18:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:13.165 02:18:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:13.165 02:18:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.165 02:18:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.165 02:18:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:13.165 02:18:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:13.165 02:18:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:13.165 02:18:12 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.165 02:18:12 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.165 02:18:12 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:13.165 02:18:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:13.165 02:18:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.165 02:18:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:13.165 02:18:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:13.165 02:18:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:13.165 02:18:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.165 02:18:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.165 02:18:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.165 02:18:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:13.165 02:18:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:13.165 02:18:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:13.165 02:18:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:13.165 02:18:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:13.165 02:18:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:13.165 02:18:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.165 02:18:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.165 02:18:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:13.165 02:18:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:13.165 02:18:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:13.165 02:18:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:13.165 02:18:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:13.165 02:18:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.165 02:18:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:13.165 02:18:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:13.165 02:18:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:13.165 02:18:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:13.165 02:18:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:13.165 
02:18:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:13.165 Cannot find device "nvmf_tgt_br" 00:16:13.165 02:18:12 -- nvmf/common.sh@154 -- # true 00:16:13.165 02:18:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:13.165 Cannot find device "nvmf_tgt_br2" 00:16:13.165 02:18:12 -- nvmf/common.sh@155 -- # true 00:16:13.165 02:18:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:13.165 02:18:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:13.165 Cannot find device "nvmf_tgt_br" 00:16:13.165 02:18:12 -- nvmf/common.sh@157 -- # true 00:16:13.165 02:18:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:13.165 Cannot find device "nvmf_tgt_br2" 00:16:13.165 02:18:12 -- nvmf/common.sh@158 -- # true 00:16:13.165 02:18:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:13.425 02:18:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:13.425 02:18:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.425 02:18:12 -- nvmf/common.sh@161 -- # true 00:16:13.425 02:18:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.425 02:18:12 -- nvmf/common.sh@162 -- # true 00:16:13.425 02:18:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.425 02:18:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.425 02:18:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.425 02:18:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.425 02:18:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.425 02:18:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.425 02:18:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.425 02:18:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.425 02:18:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:13.425 02:18:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:13.425 02:18:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:13.425 02:18:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:13.425 02:18:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:13.425 02:18:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.425 02:18:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.425 02:18:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.425 02:18:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:13.425 02:18:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:13.425 02:18:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.425 02:18:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.425 02:18:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.425 02:18:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.425 02:18:12 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.425 02:18:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:13.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:13.425 00:16:13.425 --- 10.0.0.2 ping statistics --- 00:16:13.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.425 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:13.425 02:18:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:13.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.425 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:16:13.425 00:16:13.425 --- 10.0.0.3 ping statistics --- 00:16:13.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.425 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:13.425 02:18:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:13.684 00:16:13.684 --- 10.0.0.1 ping statistics --- 00:16:13.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.684 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:13.684 02:18:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.684 02:18:12 -- nvmf/common.sh@421 -- # return 0 00:16:13.684 02:18:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:13.684 02:18:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.684 02:18:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:13.684 02:18:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:13.684 02:18:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.684 02:18:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:13.684 02:18:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:13.684 02:18:13 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:13.684 02:18:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:13.684 02:18:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:13.684 02:18:13 -- common/autotest_common.sh@10 -- # set +x 00:16:13.684 02:18:13 -- nvmf/common.sh@469 -- # nvmfpid=87204 00:16:13.684 02:18:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:13.684 02:18:13 -- nvmf/common.sh@470 -- # waitforlisten 87204 00:16:13.684 02:18:13 -- common/autotest_common.sh@819 -- # '[' -z 87204 ']' 00:16:13.684 02:18:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.684 02:18:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:13.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.684 02:18:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.684 02:18:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:13.684 02:18:13 -- common/autotest_common.sh@10 -- # set +x 00:16:13.684 [2024-07-15 02:18:13.056494] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
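Note: the nvmf_veth_init sequence traced above wires the initiator and the target into a single bridge so the TCP transport can be exercised with no physical NICs. Condensed into a standalone sketch (commands as logged; the second target interface 10.0.0.3 and most of the `ip link set ... up` calls are elided here):

```bash
ip netns add nvmf_tgt_ns_spdk                                # target runs in its own netns
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # both veth peers on one bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
```

The three pings logged above verify each leg of that path before nvmf_tgt is launched inside the namespace with --no-huge -s 1024, which is exactly the no-hugepage allocation path this suite exists to cover.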
00:16:13.684 [2024-07-15 02:18:13.056576] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:13.684 [2024-07-15 02:18:13.190139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.942 [2024-07-15 02:18:13.276309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:13.942 [2024-07-15 02:18:13.276447] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.942 [2024-07-15 02:18:13.276471] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.942 [2024-07-15 02:18:13.276479] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.942 [2024-07-15 02:18:13.276685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:13.942 [2024-07-15 02:18:13.276740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:13.942 [2024-07-15 02:18:13.278884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:13.942 [2024-07-15 02:18:13.278893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.508 02:18:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:14.508 02:18:14 -- common/autotest_common.sh@852 -- # return 0 00:16:14.508 02:18:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:14.508 02:18:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:14.508 02:18:14 -- common/autotest_common.sh@10 -- # set +x 00:16:14.767 02:18:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.767 02:18:14 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.767 02:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.767 02:18:14 -- common/autotest_common.sh@10 -- # set +x 00:16:14.767 [2024-07-15 02:18:14.083712] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.767 02:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.767 02:18:14 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:14.767 02:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.767 02:18:14 -- common/autotest_common.sh@10 -- # set +x 00:16:14.767 Malloc0 00:16:14.767 02:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.767 02:18:14 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:14.767 02:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.767 02:18:14 -- common/autotest_common.sh@10 -- # set +x 00:16:14.767 02:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.767 02:18:14 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:14.767 02:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.767 02:18:14 -- common/autotest_common.sh@10 -- # set +x 00:16:14.767 02:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.767 02:18:14 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.767 02:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.767 02:18:14 -- common/autotest_common.sh@10 -- # set +x 00:16:14.767 
[2024-07-15 02:18:14.122133] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.767 02:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.767 02:18:14 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:14.767 02:18:14 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:14.767 02:18:14 -- nvmf/common.sh@520 -- # config=() 00:16:14.767 02:18:14 -- nvmf/common.sh@520 -- # local subsystem config 00:16:14.767 02:18:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:14.767 02:18:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:14.767 { 00:16:14.767 "params": { 00:16:14.767 "name": "Nvme$subsystem", 00:16:14.767 "trtype": "$TEST_TRANSPORT", 00:16:14.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.767 "adrfam": "ipv4", 00:16:14.767 "trsvcid": "$NVMF_PORT", 00:16:14.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.767 "hdgst": ${hdgst:-false}, 00:16:14.767 "ddgst": ${ddgst:-false} 00:16:14.767 }, 00:16:14.767 "method": "bdev_nvme_attach_controller" 00:16:14.767 } 00:16:14.767 EOF 00:16:14.767 )") 00:16:14.767 02:18:14 -- nvmf/common.sh@542 -- # cat 00:16:14.767 02:18:14 -- nvmf/common.sh@544 -- # jq . 00:16:14.767 02:18:14 -- nvmf/common.sh@545 -- # IFS=, 00:16:14.767 02:18:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:14.767 "params": { 00:16:14.767 "name": "Nvme1", 00:16:14.767 "trtype": "tcp", 00:16:14.767 "traddr": "10.0.0.2", 00:16:14.767 "adrfam": "ipv4", 00:16:14.767 "trsvcid": "4420", 00:16:14.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.767 "hdgst": false, 00:16:14.767 "ddgst": false 00:16:14.767 }, 00:16:14.767 "method": "bdev_nvme_attach_controller" 00:16:14.767 }' 00:16:14.767 [2024-07-15 02:18:14.178463] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:14.768 [2024-07-15 02:18:14.178559] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87258 ] 00:16:14.768 [2024-07-15 02:18:14.317750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.026 [2024-07-15 02:18:14.452310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.026 [2024-07-15 02:18:14.452448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.026 [2024-07-15 02:18:14.452454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.285 [2024-07-15 02:18:14.644410] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
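Note: rpc_cmd is a thin wrapper around scripts/rpc.py, so the target that bdevio connects to was assembled by the RPC sequence traced above. Pulled out of the xtrace noise, that setup amounts to (a condensed sketch; NQNs and sizes as logged):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192-byte I/O units
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

bdevio then attaches through the gen_nvmf_target_json config shown above (the heredoc template expanded into the Nvme1 attach_controller parameters) and drives the write/read/compare cases. The COMPARE FAILURE and ABORTED - FAILED FUSED notices that follow, like the ones that closed the previous suite, come from the intentionally miscomparing fused compare-and-write case; the run summary still counts all 23 tests as passed.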
00:16:15.285 [2024-07-15 02:18:14.644449] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:15.285 I/O targets: 00:16:15.285 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:15.285 00:16:15.285 00:16:15.285 CUnit - A unit testing framework for C - Version 2.1-3 00:16:15.285 http://cunit.sourceforge.net/ 00:16:15.285 00:16:15.285 00:16:15.285 Suite: bdevio tests on: Nvme1n1 00:16:15.285 Test: blockdev write read block ...passed 00:16:15.285 Test: blockdev write zeroes read block ...passed 00:16:15.285 Test: blockdev write zeroes read no split ...passed 00:16:15.285 Test: blockdev write zeroes read split ...passed 00:16:15.285 Test: blockdev write zeroes read split partial ...passed 00:16:15.285 Test: blockdev reset ...[2024-07-15 02:18:14.770438] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:15.285 [2024-07-15 02:18:14.770525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa83e0 (9): Bad file descriptor 00:16:15.285 passed 00:16:15.285 Test: blockdev write read 8 blocks ...[2024-07-15 02:18:14.784576] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:15.285 passed 00:16:15.285 Test: blockdev write read size > 128k ...passed 00:16:15.285 Test: blockdev write read invalid size ...passed 00:16:15.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:15.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:15.285 Test: blockdev write read max offset ...passed 00:16:15.544 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:15.544 Test: blockdev writev readv 8 blocks ...passed 00:16:15.544 Test: blockdev writev readv 30 x 1block ...passed 00:16:15.544 Test: blockdev writev readv block ...passed 00:16:15.544 Test: blockdev writev readv size > 128k ...passed 00:16:15.544 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:15.544 Test: blockdev comparev and writev ...[2024-07-15 02:18:14.957023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.544 [2024-07-15 02:18:14.957080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.544 [2024-07-15 02:18:14.957100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.544 [2024-07-15 02:18:14.957111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:15.544 [2024-07-15 02:18:14.957468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.544 [2024-07-15 02:18:14.957490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:15.544 [2024-07-15 02:18:14.957506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.544 [2024-07-15 02:18:14.957516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:15.544 [2024-07-15 02:18:14.957804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.544 [2024-07-15 02:18:14.957821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:15.544 [2024-07-15 02:18:14.957837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.544 [2024-07-15 02:18:14.957847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:15.544 [2024-07-15 02:18:14.958119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.544 [2024-07-15 02:18:14.958136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:15.544 [2024-07-15 02:18:14.958151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:15.544 [2024-07-15 02:18:14.958161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:15.544 passed 00:16:15.544 Test: blockdev nvme passthru rw ...passed 00:16:15.544 Test: blockdev nvme passthru vendor specific ...passed 00:16:15.544 Test: blockdev nvme admin passthru ...[2024-07-15 02:18:15.040897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.544 [2024-07-15 02:18:15.040925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:15.544 [2024-07-15 02:18:15.041045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.544 [2024-07-15 02:18:15.041060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:15.544 [2024-07-15 02:18:15.041172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.545 [2024-07-15 02:18:15.041186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:15.545 [2024-07-15 02:18:15.041302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.545 [2024-07-15 02:18:15.041317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:15.545 passed 00:16:15.803 Test: blockdev copy ...passed 00:16:15.803 00:16:15.803 Run Summary: Type Total Ran Passed Failed Inactive 00:16:15.803 suites 1 1 n/a 0 0 00:16:15.803 tests 23 23 23 0 0 00:16:15.803 asserts 152 152 152 0 n/a 00:16:15.803 00:16:15.803 Elapsed time = 0.921 seconds 00:16:16.062 02:18:15 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.062 02:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:16.062 02:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:16.062 02:18:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:16.062 02:18:15 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:16.062 02:18:15 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:16.062 02:18:15 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:16.062 02:18:15 -- nvmf/common.sh@116 -- # sync 00:16:16.062 02:18:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:16.062 02:18:15 -- nvmf/common.sh@119 -- # set +e 00:16:16.062 02:18:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:16.062 02:18:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:16.062 rmmod nvme_tcp 00:16:16.062 rmmod nvme_fabrics 00:16:16.062 rmmod nvme_keyring 00:16:16.062 02:18:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:16.062 02:18:15 -- nvmf/common.sh@123 -- # set -e 00:16:16.062 02:18:15 -- nvmf/common.sh@124 -- # return 0 00:16:16.062 02:18:15 -- nvmf/common.sh@477 -- # '[' -n 87204 ']' 00:16:16.062 02:18:15 -- nvmf/common.sh@478 -- # killprocess 87204 00:16:16.062 02:18:15 -- common/autotest_common.sh@926 -- # '[' -z 87204 ']' 00:16:16.062 02:18:15 -- common/autotest_common.sh@930 -- # kill -0 87204 00:16:16.062 02:18:15 -- common/autotest_common.sh@931 -- # uname 00:16:16.062 02:18:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.062 02:18:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87204 00:16:16.062 02:18:15 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:16.062 02:18:15 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:16.062 killing process with pid 87204 00:16:16.062 02:18:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87204' 00:16:16.062 02:18:15 -- common/autotest_common.sh@945 -- # kill 87204 00:16:16.062 02:18:15 -- common/autotest_common.sh@950 -- # wait 87204 00:16:16.321 02:18:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:16.321 02:18:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:16.321 02:18:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:16.321 02:18:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.321 02:18:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:16.321 02:18:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.321 02:18:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.321 02:18:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.581 02:18:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:16.581 ************************************ 00:16:16.581 END TEST nvmf_bdevio_no_huge 00:16:16.581 ************************************ 00:16:16.581 00:16:16.581 real 0m3.393s 00:16:16.581 user 0m12.211s 00:16:16.581 sys 0m1.260s 00:16:16.581 02:18:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.581 02:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:16.581 02:18:15 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:16.581 02:18:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:16.581 02:18:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.581 02:18:15 -- common/autotest_common.sh@10 -- # set +x 00:16:16.581 ************************************ 00:16:16.581 START TEST nvmf_tls 00:16:16.581 ************************************ 00:16:16.581 02:18:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:16.581 * Looking for test storage... 
00:16:16.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:16.581 02:18:16 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.581 02:18:16 -- nvmf/common.sh@7 -- # uname -s 00:16:16.581 02:18:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.581 02:18:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.581 02:18:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.581 02:18:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.581 02:18:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.581 02:18:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.581 02:18:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.581 02:18:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.581 02:18:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.581 02:18:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.581 02:18:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:16:16.581 02:18:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:16:16.581 02:18:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.581 02:18:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.581 02:18:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.581 02:18:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.581 02:18:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.581 02:18:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.581 02:18:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.581 02:18:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.581 02:18:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.581 02:18:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.581 02:18:16 -- paths/export.sh@5 
-- # export PATH 00:16:16.581 02:18:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.581 02:18:16 -- nvmf/common.sh@46 -- # : 0 00:16:16.581 02:18:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:16.581 02:18:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:16.581 02:18:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:16.581 02:18:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.581 02:18:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.581 02:18:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:16.581 02:18:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:16.581 02:18:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:16.581 02:18:16 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.581 02:18:16 -- target/tls.sh@71 -- # nvmftestinit 00:16:16.581 02:18:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:16.581 02:18:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.581 02:18:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:16.581 02:18:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:16.581 02:18:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:16.581 02:18:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.581 02:18:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.581 02:18:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.581 02:18:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:16.581 02:18:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:16.581 02:18:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:16.581 02:18:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:16.581 02:18:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:16.581 02:18:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:16.581 02:18:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.581 02:18:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.581 02:18:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:16.581 02:18:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:16.581 02:18:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.581 02:18:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.581 02:18:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.581 02:18:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.581 02:18:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.582 02:18:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.582 02:18:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.582 02:18:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.582 02:18:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:16.582 02:18:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:16:16.582 Cannot find device "nvmf_tgt_br" 00:16:16.582 02:18:16 -- nvmf/common.sh@154 -- # true 00:16:16.582 02:18:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.582 Cannot find device "nvmf_tgt_br2" 00:16:16.582 02:18:16 -- nvmf/common.sh@155 -- # true 00:16:16.582 02:18:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:16.582 02:18:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:16.582 Cannot find device "nvmf_tgt_br" 00:16:16.582 02:18:16 -- nvmf/common.sh@157 -- # true 00:16:16.582 02:18:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:16.582 Cannot find device "nvmf_tgt_br2" 00:16:16.582 02:18:16 -- nvmf/common.sh@158 -- # true 00:16:16.582 02:18:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:16.840 02:18:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:16.840 02:18:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.840 02:18:16 -- nvmf/common.sh@161 -- # true 00:16:16.840 02:18:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.840 02:18:16 -- nvmf/common.sh@162 -- # true 00:16:16.840 02:18:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.840 02:18:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.840 02:18:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.840 02:18:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.840 02:18:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.841 02:18:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.841 02:18:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.841 02:18:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.841 02:18:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.841 02:18:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:16.841 02:18:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:16.841 02:18:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:16.841 02:18:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:16.841 02:18:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.841 02:18:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.841 02:18:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:16.841 02:18:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:16.841 02:18:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:16.841 02:18:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.841 02:18:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.841 02:18:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.100 02:18:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.100 02:18:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:16:17.100 02:18:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:17.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:16:17.100 00:16:17.100 --- 10.0.0.2 ping statistics --- 00:16:17.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.100 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:16:17.100 02:18:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:17.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:16:17.100 00:16:17.100 --- 10.0.0.3 ping statistics --- 00:16:17.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.100 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:17.100 02:18:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:17.100 00:16:17.100 --- 10.0.0.1 ping statistics --- 00:16:17.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.100 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:17.100 02:18:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.100 02:18:16 -- nvmf/common.sh@421 -- # return 0 00:16:17.100 02:18:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:17.100 02:18:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.100 02:18:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:17.100 02:18:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:17.100 02:18:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.100 02:18:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:17.100 02:18:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:17.100 02:18:16 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:17.100 02:18:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.100 02:18:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:17.100 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 02:18:16 -- nvmf/common.sh@469 -- # nvmfpid=87439 00:16:17.100 02:18:16 -- nvmf/common.sh@470 -- # waitforlisten 87439 00:16:17.100 02:18:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:17.100 02:18:16 -- common/autotest_common.sh@819 -- # '[' -z 87439 ']' 00:16:17.100 02:18:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.100 02:18:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.100 02:18:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.100 02:18:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.100 02:18:16 -- common/autotest_common.sh@10 -- # set +x 00:16:17.100 [2024-07-15 02:18:16.501447] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
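Note: this time nvmfappstart launches nvmf_tgt with --wait-for-rpc, because the TLS suite must configure the socket implementation before the framework initializes; waitforlisten then blocks until the RPC socket answers. A minimal sketch of that helper, simplified from autotest_common.sh (the real version carries more bookkeeping, so treat this as an approximation):

```bash
waitforlisten() {  # poll until process $1 serves RPC on the given socket
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1        # target died during startup
        if rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0                                   # RPC socket is live
        fi
        sleep 0.1
    done
    return 1                                           # timed out
}
```

With the socket up, the suite switches the default sock implementation to ssl and round-trips tls_version and ktls settings, as the sock_impl_set_options / sock_impl_get_options exchanges that follow show.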
00:16:17.100 [2024-07-15 02:18:16.501544] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.100 [2024-07-15 02:18:16.644254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.359 [2024-07-15 02:18:16.726555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.359 [2024-07-15 02:18:16.726812] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.359 [2024-07-15 02:18:16.726840] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.359 [2024-07-15 02:18:16.726859] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.359 [2024-07-15 02:18:16.726915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.925 02:18:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.925 02:18:17 -- common/autotest_common.sh@852 -- # return 0 00:16:17.925 02:18:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.925 02:18:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:17.925 02:18:17 -- common/autotest_common.sh@10 -- # set +x 00:16:17.925 02:18:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.925 02:18:17 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:17.925 02:18:17 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:18.183 true 00:16:18.183 02:18:17 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:18.183 02:18:17 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:18.441 02:18:17 -- target/tls.sh@82 -- # version=0 00:16:18.441 02:18:17 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:18.441 02:18:17 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:18.699 02:18:18 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:18.699 02:18:18 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:18.957 02:18:18 -- target/tls.sh@90 -- # version=13 00:16:18.957 02:18:18 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:18.957 02:18:18 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:19.216 02:18:18 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:19.216 02:18:18 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:19.474 02:18:18 -- target/tls.sh@98 -- # version=7 00:16:19.474 02:18:18 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:19.474 02:18:18 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:19.474 02:18:18 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:19.733 02:18:19 -- target/tls.sh@105 -- # ktls=false 00:16:19.733 02:18:19 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:19.733 02:18:19 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:19.991 02:18:19 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:19.991 02:18:19 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:16:20.250 02:18:19 -- target/tls.sh@113 -- # ktls=true 00:16:20.250 02:18:19 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:20.250 02:18:19 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:20.510 02:18:19 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:20.510 02:18:19 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:20.769 02:18:20 -- target/tls.sh@121 -- # ktls=false 00:16:20.769 02:18:20 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:20.769 02:18:20 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:20.769 02:18:20 -- target/tls.sh@49 -- # local key hash crc 00:16:20.769 02:18:20 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:20.769 02:18:20 -- target/tls.sh@51 -- # hash=01 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # gzip -1 -c 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # tail -c8 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # head -c 4 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # crc='p$H�' 00:16:20.769 02:18:20 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:20.769 02:18:20 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:20.769 02:18:20 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:20.769 02:18:20 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:20.769 02:18:20 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:20.769 02:18:20 -- target/tls.sh@49 -- # local key hash crc 00:16:20.769 02:18:20 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:20.769 02:18:20 -- target/tls.sh@51 -- # hash=01 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # gzip -1 -c 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # tail -c8 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # head -c 4 00:16:20.769 02:18:20 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:20.769 02:18:20 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:20.769 02:18:20 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:20.769 02:18:20 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:20.769 02:18:20 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:20.770 02:18:20 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:20.770 02:18:20 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:20.770 02:18:20 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:20.770 02:18:20 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:20.770 02:18:20 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:20.770 02:18:20 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:20.770 02:18:20 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:21.029 02:18:20 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:21.288 02:18:20 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:21.288 02:18:20 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:21.288 02:18:20 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:21.547 [2024-07-15 02:18:20.948100] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.547 02:18:20 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:21.805 02:18:21 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:22.064 [2024-07-15 02:18:21.416245] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:22.064 [2024-07-15 02:18:21.416437] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.064 02:18:21 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:22.323 malloc0 00:16:22.323 02:18:21 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:22.582 02:18:21 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:22.840 02:18:22 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:32.839 Initializing NVMe Controllers 00:16:32.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:32.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:32.839 Initialization complete. Launching workers. 
00:16:32.839 ======================================================== 00:16:32.839 Latency(us) 00:16:32.839 Device Information : IOPS MiB/s Average min max 00:16:32.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11286.69 44.09 5671.60 1561.74 9498.57 00:16:32.839 ======================================================== 00:16:32.839 Total : 11286.69 44.09 5671.60 1561.74 9498.57 00:16:32.839 00:16:32.839 02:18:32 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:32.839 02:18:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:32.839 02:18:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:32.839 02:18:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:32.839 02:18:32 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:32.839 02:18:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:32.839 02:18:32 -- target/tls.sh@28 -- # bdevperf_pid=87810 00:16:32.839 02:18:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:32.839 02:18:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.839 02:18:32 -- target/tls.sh@31 -- # waitforlisten 87810 /var/tmp/bdevperf.sock 00:16:32.839 02:18:32 -- common/autotest_common.sh@819 -- # '[' -z 87810 ']' 00:16:32.839 02:18:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.839 02:18:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.839 02:18:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.839 02:18:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.839 02:18:32 -- common/autotest_common.sh@10 -- # set +x 00:16:33.097 [2024-07-15 02:18:32.405173] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:33.097 [2024-07-15 02:18:32.405291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87810 ] 00:16:33.097 [2024-07-15 02:18:32.546477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.097 [2024-07-15 02:18:32.623900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.056 02:18:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:34.056 02:18:33 -- common/autotest_common.sh@852 -- # return 0 00:16:34.056 02:18:33 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:34.056 [2024-07-15 02:18:33.558926] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:34.313 TLSTESTn1 00:16:34.313 02:18:33 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:34.313 Running I/O for 10 seconds... 
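While that ten-second verify workload runs, the recipe the trace above just executed is worth condensing. Target side (setup_nvmf_tgt), with rpc.py and the key path abbreviated from the full /home/vagrant/spdk_repo paths used in the run; only the -k on the listener and the --psk on the host entry differ from a plain NVMe/TCP target:

    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt

Initiator side (run_bdevperf) is the usual bdevperf-as-a-service pattern: start bdevperf idle with -z on a private RPC socket, attach the controller through that socket (the --psk here is what exercises the TLS handshake), then trigger the workload:

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key1.txt
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests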
00:16:44.283 00:16:44.283 Latency(us) 00:16:44.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.283 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:44.283 Verification LBA range: start 0x0 length 0x2000 00:16:44.283 TLSTESTn1 : 10.01 6200.82 24.22 0.00 0.00 20608.52 4379.00 21448.15 00:16:44.283 =================================================================================================================== 00:16:44.283 Total : 6200.82 24.22 0.00 0.00 20608.52 4379.00 21448.15 00:16:44.283 0 00:16:44.283 02:18:43 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.283 02:18:43 -- target/tls.sh@45 -- # killprocess 87810 00:16:44.283 02:18:43 -- common/autotest_common.sh@926 -- # '[' -z 87810 ']' 00:16:44.283 02:18:43 -- common/autotest_common.sh@930 -- # kill -0 87810 00:16:44.283 02:18:43 -- common/autotest_common.sh@931 -- # uname 00:16:44.283 02:18:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:44.283 02:18:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87810 00:16:44.283 02:18:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:44.283 02:18:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:44.283 killing process with pid 87810 00:16:44.283 02:18:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87810' 00:16:44.283 02:18:43 -- common/autotest_common.sh@945 -- # kill 87810 00:16:44.283 Received shutdown signal, test time was about 10.000000 seconds 00:16:44.283 00:16:44.283 Latency(us) 00:16:44.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.283 =================================================================================================================== 00:16:44.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.283 02:18:43 -- common/autotest_common.sh@950 -- # wait 87810 00:16:44.542 02:18:43 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:44.542 02:18:43 -- common/autotest_common.sh@640 -- # local es=0 00:16:44.542 02:18:43 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:44.542 02:18:43 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:44.542 02:18:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:44.542 02:18:43 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:44.542 02:18:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:44.542 02:18:43 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:44.542 02:18:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:44.542 02:18:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:44.542 02:18:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:44.542 02:18:43 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:16:44.542 02:18:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:44.542 02:18:43 -- target/tls.sh@28 -- # bdevperf_pid=87959 00:16:44.542 02:18:43 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock 
-q 128 -o 4096 -w verify -t 10 00:16:44.542 02:18:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:44.542 02:18:43 -- target/tls.sh@31 -- # waitforlisten 87959 /var/tmp/bdevperf.sock 00:16:44.542 02:18:43 -- common/autotest_common.sh@819 -- # '[' -z 87959 ']' 00:16:44.542 02:18:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:44.542 02:18:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:44.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:44.542 02:18:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:44.542 02:18:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:44.542 02:18:44 -- common/autotest_common.sh@10 -- # set +x 00:16:44.542 [2024-07-15 02:18:44.050212] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:44.542 [2024-07-15 02:18:44.050312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87959 ] 00:16:44.801 [2024-07-15 02:18:44.192720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.801 [2024-07-15 02:18:44.267901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.738 02:18:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:45.738 02:18:44 -- common/autotest_common.sh@852 -- # return 0 00:16:45.738 02:18:44 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:45.738 [2024-07-15 02:18:45.160020] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.738 [2024-07-15 02:18:45.166887] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:45.738 [2024-07-15 02:18:45.167705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a1db0 (107): Transport endpoint is not connected 00:16:45.738 [2024-07-15 02:18:45.168707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a1db0 (9): Bad file descriptor 00:16:45.738 [2024-07-15 02:18:45.169694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:45.738 [2024-07-15 02:18:45.169734] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:45.738 [2024-07-15 02:18:45.169746] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
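This is the first of four expected-failure cases (tls.sh@155/158/161/164): wrong key, unknown host NQN, unknown subsystem NQN, and no key at all. Each wraps run_bdevperf in autotest_common.sh's NOT helper, so the test step succeeds only if the attach fails; with key2.txt the TLS handshake cannot complete, the target closes the socket, and the initiator surfaces the errno 107 / bad-descriptor sequence above followed by the JSON-RPC dump below. The inversion logic, heavily simplified (the real helper also validates the command and distinguishes clean failures from crashes):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1    # killed by a signal: treat as a real failure
        (( es != 0 ))                 # succeed only when the wrapped command failed
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 key2.txt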
00:16:45.738 2024/07/15 02:18:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:45.738 request: 00:16:45.738 { 00:16:45.738 "method": "bdev_nvme_attach_controller", 00:16:45.738 "params": { 00:16:45.738 "name": "TLSTEST", 00:16:45.738 "trtype": "tcp", 00:16:45.738 "traddr": "10.0.0.2", 00:16:45.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:45.738 "adrfam": "ipv4", 00:16:45.738 "trsvcid": "4420", 00:16:45.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.738 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:16:45.738 } 00:16:45.738 } 00:16:45.738 Got JSON-RPC error response 00:16:45.738 GoRPCClient: error on JSON-RPC call 00:16:45.738 02:18:45 -- target/tls.sh@36 -- # killprocess 87959 00:16:45.738 02:18:45 -- common/autotest_common.sh@926 -- # '[' -z 87959 ']' 00:16:45.738 02:18:45 -- common/autotest_common.sh@930 -- # kill -0 87959 00:16:45.738 02:18:45 -- common/autotest_common.sh@931 -- # uname 00:16:45.738 02:18:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:45.738 02:18:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87959 00:16:45.738 killing process with pid 87959 00:16:45.738 Received shutdown signal, test time was about 10.000000 seconds 00:16:45.738 00:16:45.738 Latency(us) 00:16:45.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.738 =================================================================================================================== 00:16:45.738 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:45.738 02:18:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:45.738 02:18:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:45.738 02:18:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87959' 00:16:45.738 02:18:45 -- common/autotest_common.sh@945 -- # kill 87959 00:16:45.738 02:18:45 -- common/autotest_common.sh@950 -- # wait 87959 00:16:45.997 02:18:45 -- target/tls.sh@37 -- # return 1 00:16:45.997 02:18:45 -- common/autotest_common.sh@643 -- # es=1 00:16:45.997 02:18:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:45.997 02:18:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:45.997 02:18:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:45.997 02:18:45 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.997 02:18:45 -- common/autotest_common.sh@640 -- # local es=0 00:16:45.997 02:18:45 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.997 02:18:45 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:45.997 02:18:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:45.997 02:18:45 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:45.997 02:18:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:45.997 02:18:45 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:45.997 02:18:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:45.997 02:18:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:45.997 02:18:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:45.997 02:18:45 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:45.997 02:18:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:45.997 02:18:45 -- target/tls.sh@28 -- # bdevperf_pid=88005 00:16:45.997 02:18:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:45.997 02:18:45 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:45.997 02:18:45 -- target/tls.sh@31 -- # waitforlisten 88005 /var/tmp/bdevperf.sock 00:16:45.997 02:18:45 -- common/autotest_common.sh@819 -- # '[' -z 88005 ']' 00:16:45.997 02:18:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.997 02:18:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:45.997 02:18:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:45.997 02:18:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:45.997 02:18:45 -- common/autotest_common.sh@10 -- # set +x 00:16:45.997 [2024-07-15 02:18:45.459754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:45.997 [2024-07-15 02:18:45.459863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88005 ] 00:16:46.256 [2024-07-15 02:18:45.592871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.256 [2024-07-15 02:18:45.683137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.193 02:18:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:47.193 02:18:46 -- common/autotest_common.sh@852 -- # return 0 00:16:47.193 02:18:46 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:47.193 [2024-07-15 02:18:46.661285] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:47.193 [2024-07-15 02:18:46.667026] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:47.193 [2024-07-15 02:18:46.667093] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:47.193 [2024-07-15 02:18:46.667157] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:47.193 [2024-07-15 02:18:46.668011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cadb0 (107): Transport endpoint is not connected 
00:16:47.193 [2024-07-15 02:18:46.669002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cadb0 (9): Bad file descriptor 00:16:47.193 [2024-07-15 02:18:46.669998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:47.193 [2024-07-15 02:18:46.670036] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:47.193 [2024-07-15 02:18:46.670062] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:47.193 2024/07/15 02:18:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:47.193 request: 00:16:47.193 { 00:16:47.193 "method": "bdev_nvme_attach_controller", 00:16:47.193 "params": { 00:16:47.193 "name": "TLSTEST", 00:16:47.193 "trtype": "tcp", 00:16:47.193 "traddr": "10.0.0.2", 00:16:47.193 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:47.193 "adrfam": "ipv4", 00:16:47.193 "trsvcid": "4420", 00:16:47.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.193 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:47.193 } 00:16:47.193 } 00:16:47.193 Got JSON-RPC error response 00:16:47.193 GoRPCClient: error on JSON-RPC call 00:16:47.193 02:18:46 -- target/tls.sh@36 -- # killprocess 88005 00:16:47.193 02:18:46 -- common/autotest_common.sh@926 -- # '[' -z 88005 ']' 00:16:47.193 02:18:46 -- common/autotest_common.sh@930 -- # kill -0 88005 00:16:47.193 02:18:46 -- common/autotest_common.sh@931 -- # uname 00:16:47.193 02:18:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:47.193 02:18:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88005 00:16:47.193 killing process with pid 88005 00:16:47.193 Received shutdown signal, test time was about 10.000000 seconds 00:16:47.193 00:16:47.193 Latency(us) 00:16:47.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.193 =================================================================================================================== 00:16:47.193 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:47.193 02:18:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:47.193 02:18:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:47.193 02:18:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88005' 00:16:47.193 02:18:46 -- common/autotest_common.sh@945 -- # kill 88005 00:16:47.193 02:18:46 -- common/autotest_common.sh@950 -- # wait 88005 00:16:47.452 02:18:46 -- target/tls.sh@37 -- # return 1 00:16:47.452 02:18:46 -- common/autotest_common.sh@643 -- # es=1 00:16:47.452 02:18:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:47.452 02:18:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:47.452 02:18:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:47.452 02:18:46 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:47.452 02:18:46 -- common/autotest_common.sh@640 -- # local es=0 00:16:47.452 02:18:46 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:47.452 02:18:46 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:47.452 02:18:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:47.452 02:18:46 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:47.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.452 02:18:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:47.452 02:18:46 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:47.452 02:18:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:47.452 02:18:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:47.452 02:18:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:47.452 02:18:46 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:47.452 02:18:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:47.452 02:18:46 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:47.452 02:18:46 -- target/tls.sh@28 -- # bdevperf_pid=88050 00:16:47.452 02:18:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.452 02:18:46 -- target/tls.sh@31 -- # waitforlisten 88050 /var/tmp/bdevperf.sock 00:16:47.452 02:18:46 -- common/autotest_common.sh@819 -- # '[' -z 88050 ']' 00:16:47.452 02:18:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.452 02:18:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:47.452 02:18:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.452 02:18:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:47.452 02:18:46 -- common/autotest_common.sh@10 -- # set +x 00:16:47.452 [2024-07-15 02:18:46.952535] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:47.452 [2024-07-15 02:18:46.952829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88050 ] 00:16:47.711 [2024-07-15 02:18:47.084693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.711 [2024-07-15 02:18:47.159584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.646 02:18:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:48.646 02:18:47 -- common/autotest_common.sh@852 -- # return 0 00:16:48.646 02:18:47 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:48.646 [2024-07-15 02:18:48.062881] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.646 [2024-07-15 02:18:48.067912] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:48.646 [2024-07-15 02:18:48.067951] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:48.646 [2024-07-15 02:18:48.068015] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:48.646 [2024-07-15 02:18:48.068583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb39db0 (107): Transport endpoint is not connected 00:16:48.646 [2024-07-15 02:18:48.069556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb39db0 (9): Bad file descriptor 00:16:48.646 [2024-07-15 02:18:48.070551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:48.646 [2024-07-15 02:18:48.070587] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:48.646 [2024-07-15 02:18:48.070604] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
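The two unknown-identity cases (host2 above at tls.sh@158, and cnode2 just traced at tls.sh@161) fail in the target's PSK lookup rather than on the initiator: the server-side callback derives a TLS PSK identity of the form "NVMe<version>R<hash-id> <hostnqn> <subnqn>" and only finds a key if exactly that (host, subsystem) pair was registered with nvmf_subsystem_add_host. Reconstructing the identity the way these error messages print it (format inferred from the log lines, not from the spec):

    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1   # never registered -> handshake aborted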
00:16:48.646 2024/07/15 02:18:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:48.646 request: 00:16:48.646 { 00:16:48.646 "method": "bdev_nvme_attach_controller", 00:16:48.646 "params": { 00:16:48.646 "name": "TLSTEST", 00:16:48.646 "trtype": "tcp", 00:16:48.646 "traddr": "10.0.0.2", 00:16:48.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:48.646 "adrfam": "ipv4", 00:16:48.646 "trsvcid": "4420", 00:16:48.646 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:48.646 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:16:48.646 } 00:16:48.646 } 00:16:48.646 Got JSON-RPC error response 00:16:48.646 GoRPCClient: error on JSON-RPC call 00:16:48.646 02:18:48 -- target/tls.sh@36 -- # killprocess 88050 00:16:48.646 02:18:48 -- common/autotest_common.sh@926 -- # '[' -z 88050 ']' 00:16:48.646 02:18:48 -- common/autotest_common.sh@930 -- # kill -0 88050 00:16:48.646 02:18:48 -- common/autotest_common.sh@931 -- # uname 00:16:48.646 02:18:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:48.646 02:18:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88050 00:16:48.646 killing process with pid 88050 00:16:48.646 Received shutdown signal, test time was about 10.000000 seconds 00:16:48.646 00:16:48.646 Latency(us) 00:16:48.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.646 =================================================================================================================== 00:16:48.646 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:48.646 02:18:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:48.646 02:18:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:48.646 02:18:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88050' 00:16:48.646 02:18:48 -- common/autotest_common.sh@945 -- # kill 88050 00:16:48.646 02:18:48 -- common/autotest_common.sh@950 -- # wait 88050 00:16:48.902 02:18:48 -- target/tls.sh@37 -- # return 1 00:16:48.902 02:18:48 -- common/autotest_common.sh@643 -- # es=1 00:16:48.902 02:18:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:48.902 02:18:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:48.902 02:18:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:48.902 02:18:48 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:48.902 02:18:48 -- common/autotest_common.sh@640 -- # local es=0 00:16:48.902 02:18:48 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:48.902 02:18:48 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:48.902 02:18:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:48.902 02:18:48 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:48.902 02:18:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:48.902 02:18:48 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:48.902 02:18:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:48.902 02:18:48 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:16:48.902 02:18:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:48.902 02:18:48 -- target/tls.sh@23 -- # psk= 00:16:48.902 02:18:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.902 02:18:48 -- target/tls.sh@28 -- # bdevperf_pid=88096 00:16:48.902 02:18:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:48.902 02:18:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.902 02:18:48 -- target/tls.sh@31 -- # waitforlisten 88096 /var/tmp/bdevperf.sock 00:16:48.902 02:18:48 -- common/autotest_common.sh@819 -- # '[' -z 88096 ']' 00:16:48.902 02:18:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:48.902 02:18:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:48.902 02:18:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:48.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:48.902 02:18:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:48.902 02:18:48 -- common/autotest_common.sh@10 -- # set +x 00:16:48.902 [2024-07-15 02:18:48.354061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:48.902 [2024-07-15 02:18:48.354151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88096 ] 00:16:49.160 [2024-07-15 02:18:48.488439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.160 [2024-07-15 02:18:48.561439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.092 02:18:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:50.092 02:18:49 -- common/autotest_common.sh@852 -- # return 0 00:16:50.092 02:18:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:50.092 [2024-07-15 02:18:49.517513] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:50.092 [2024-07-15 02:18:49.518888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b44f0 (9): Bad file descriptor 00:16:50.092 [2024-07-15 02:18:49.519883] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:50.092 [2024-07-15 02:18:49.519905] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:50.092 [2024-07-15 02:18:49.519917] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
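The last of the four negative cases passes no --psk at all, so the initiator speaks plaintext NVMe/TCP to a listener that was created with -k and is presumably waiting for a TLS ClientHello; the connection is dropped and the same endpoint-not-connected sequence results. The visible difference is in the JSON-RPC dump below: the params map simply has no psk entry. The failing attach, abbreviated:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # note: no --psk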
00:16:50.092 2024/07/15 02:18:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:50.092 request: 00:16:50.092 { 00:16:50.092 "method": "bdev_nvme_attach_controller", 00:16:50.092 "params": { 00:16:50.092 "name": "TLSTEST", 00:16:50.092 "trtype": "tcp", 00:16:50.092 "traddr": "10.0.0.2", 00:16:50.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.092 "adrfam": "ipv4", 00:16:50.092 "trsvcid": "4420", 00:16:50.092 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:50.092 } 00:16:50.092 } 00:16:50.092 Got JSON-RPC error response 00:16:50.092 GoRPCClient: error on JSON-RPC call 00:16:50.092 02:18:49 -- target/tls.sh@36 -- # killprocess 88096 00:16:50.092 02:18:49 -- common/autotest_common.sh@926 -- # '[' -z 88096 ']' 00:16:50.092 02:18:49 -- common/autotest_common.sh@930 -- # kill -0 88096 00:16:50.092 02:18:49 -- common/autotest_common.sh@931 -- # uname 00:16:50.092 02:18:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:50.092 02:18:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88096 00:16:50.092 02:18:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:50.092 02:18:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:50.092 02:18:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88096' 00:16:50.092 killing process with pid 88096 00:16:50.092 02:18:49 -- common/autotest_common.sh@945 -- # kill 88096 00:16:50.092 Received shutdown signal, test time was about 10.000000 seconds 00:16:50.092 00:16:50.092 Latency(us) 00:16:50.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.092 =================================================================================================================== 00:16:50.092 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:50.092 02:18:49 -- common/autotest_common.sh@950 -- # wait 88096 00:16:50.350 02:18:49 -- target/tls.sh@37 -- # return 1 00:16:50.350 02:18:49 -- common/autotest_common.sh@643 -- # es=1 00:16:50.350 02:18:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:50.350 02:18:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:50.350 02:18:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:50.350 02:18:49 -- target/tls.sh@167 -- # killprocess 87439 00:16:50.350 02:18:49 -- common/autotest_common.sh@926 -- # '[' -z 87439 ']' 00:16:50.350 02:18:49 -- common/autotest_common.sh@930 -- # kill -0 87439 00:16:50.350 02:18:49 -- common/autotest_common.sh@931 -- # uname 00:16:50.350 02:18:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:50.350 02:18:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87439 00:16:50.350 02:18:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:50.350 02:18:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:50.350 killing process with pid 87439 00:16:50.350 02:18:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87439' 00:16:50.350 02:18:49 -- common/autotest_common.sh@945 -- # kill 87439 00:16:50.350 02:18:49 -- common/autotest_common.sh@950 -- # wait 87439 00:16:50.607 02:18:49 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:16:50.607 02:18:49 -- 
target/tls.sh@49 -- # local key hash crc 00:16:50.607 02:18:49 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:50.607 02:18:49 -- target/tls.sh@51 -- # hash=02 00:16:50.607 02:18:49 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:16:50.607 02:18:49 -- target/tls.sh@52 -- # tail -c8 00:16:50.607 02:18:49 -- target/tls.sh@52 -- # gzip -1 -c 00:16:50.607 02:18:49 -- target/tls.sh@52 -- # head -c 4 00:16:50.607 02:18:49 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:50.607 02:18:49 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:50.607 02:18:49 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:50.607 02:18:49 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:50.607 02:18:50 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:50.607 02:18:50 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:50.607 02:18:50 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:50.607 02:18:50 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:50.607 02:18:50 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:50.607 02:18:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:50.607 02:18:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:50.607 02:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:50.607 02:18:50 -- nvmf/common.sh@469 -- # nvmfpid=88158 00:16:50.607 02:18:50 -- nvmf/common.sh@470 -- # waitforlisten 88158 00:16:50.607 02:18:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:50.607 02:18:50 -- common/autotest_common.sh@819 -- # '[' -z 88158 ']' 00:16:50.607 02:18:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.607 02:18:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:50.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.607 02:18:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.607 02:18:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:50.607 02:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:50.607 [2024-07-15 02:18:50.059356] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:50.607 [2024-07-15 02:18:50.059463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.864 [2024-07-15 02:18:50.194068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.864 [2024-07-15 02:18:50.257757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:50.864 [2024-07-15 02:18:50.257910] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.864 [2024-07-15 02:18:50.257924] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:50.864 [2024-07-15 02:18:50.257932] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.864 [2024-07-15 02:18:50.257957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.796 02:18:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:51.796 02:18:51 -- common/autotest_common.sh@852 -- # return 0 00:16:51.796 02:18:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:51.796 02:18:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:51.796 02:18:51 -- common/autotest_common.sh@10 -- # set +x 00:16:51.796 02:18:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.796 02:18:51 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:51.796 02:18:51 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:51.796 02:18:51 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:51.796 [2024-07-15 02:18:51.267664] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.796 02:18:51 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:52.054 02:18:51 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:52.345 [2024-07-15 02:18:51.731818] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:52.345 [2024-07-15 02:18:51.732092] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.345 02:18:51 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:52.602 malloc0 00:16:52.602 02:18:51 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:52.859 02:18:52 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:53.117 02:18:52 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:53.117 02:18:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:53.117 02:18:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:53.117 02:18:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:53.117 02:18:52 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:16:53.117 02:18:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.117 02:18:52 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.117 02:18:52 -- target/tls.sh@28 -- # bdevperf_pid=88261 00:16:53.117 02:18:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.117 02:18:52 -- target/tls.sh@31 -- # waitforlisten 88261 /var/tmp/bdevperf.sock 00:16:53.117 02:18:52 -- common/autotest_common.sh@819 -- # '[' -z 88261 ']' 00:16:53.117 02:18:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.117 
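The long-key variant above (tls.sh@168) follows the identical CRC-and-base64 construction, just with a 48-hex-digit key and hash id 02, and the target is then rebuilt around key_long.txt. The printed key can be sanity-checked offline by decoding the base64 payload and recomputing the CRC: both commands below should print the same eight hex digits (c165cd27, per the base64 tail in this run). This is an illustrative check, not part of the test:

    b64=MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==
    echo -n "$b64" | base64 -d | tail -c4 | xxd -p        # CRC32 bytes stored in the key
    echo -n 00112233445566778899aabbccddeeff0011223344556677 \
        | gzip -1 -c | tail -c8 | head -c4 | xxd -p       # CRC32 recomputed from the raw key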
02:18:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.117 02:18:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.117 02:18:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.117 02:18:52 -- common/autotest_common.sh@10 -- # set +x 00:16:53.117 [2024-07-15 02:18:52.479458] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:53.117 [2024-07-15 02:18:52.479541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88261 ] 00:16:53.117 [2024-07-15 02:18:52.616422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.374 [2024-07-15 02:18:52.686329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.939 02:18:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:53.939 02:18:53 -- common/autotest_common.sh@852 -- # return 0 00:16:53.939 02:18:53 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:16:54.197 [2024-07-15 02:18:53.709207] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.454 TLSTESTn1 00:16:54.454 02:18:53 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:54.454 Running I/O for 10 seconds... 
00:17:04.423 00:17:04.423 Latency(us) 00:17:04.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.423 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:04.423 Verification LBA range: start 0x0 length 0x2000 00:17:04.424 TLSTESTn1 : 10.01 5860.19 22.89 0.00 0.00 21807.85 4379.00 25618.62 00:17:04.424 =================================================================================================================== 00:17:04.424 Total : 5860.19 22.89 0.00 0.00 21807.85 4379.00 25618.62 00:17:04.424 0 00:17:04.424 02:19:03 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:04.424 02:19:03 -- target/tls.sh@45 -- # killprocess 88261 00:17:04.424 02:19:03 -- common/autotest_common.sh@926 -- # '[' -z 88261 ']' 00:17:04.424 02:19:03 -- common/autotest_common.sh@930 -- # kill -0 88261 00:17:04.424 02:19:03 -- common/autotest_common.sh@931 -- # uname 00:17:04.424 02:19:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:04.424 02:19:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88261 00:17:04.424 02:19:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:04.424 02:19:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:04.424 killing process with pid 88261 00:17:04.424 02:19:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88261' 00:17:04.424 02:19:03 -- common/autotest_common.sh@945 -- # kill 88261 00:17:04.424 Received shutdown signal, test time was about 10.000000 seconds 00:17:04.424 00:17:04.424 Latency(us) 00:17:04.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.424 =================================================================================================================== 00:17:04.424 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:04.424 02:19:03 -- common/autotest_common.sh@950 -- # wait 88261 00:17:04.682 02:19:04 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:04.682 02:19:04 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:04.682 02:19:04 -- common/autotest_common.sh@640 -- # local es=0 00:17:04.682 02:19:04 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:04.682 02:19:04 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:04.682 02:19:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:04.682 02:19:04 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:04.682 02:19:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:04.682 02:19:04 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:04.682 02:19:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:04.682 02:19:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:04.682 02:19:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:04.682 02:19:04 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:04.682 02:19:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:04.682 02:19:04 -- target/tls.sh@28 -- # bdevperf_pid=88409 
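tls.sh@179-180 above shifts the test from key material to key hygiene: the long key is made world-readable and the attach is again expected to fail, this time on the initiator before any packet is sent, because bdev_nvme's PSK loader checks the file mode when it reads the key. The rule the traces below demonstrate is simply:

    chmod 0600 key_long.txt   # accepted: owner-only PSK files load fine
    chmod 0666 key_long.txt   # rejected: 'Incorrect permissions for PSK file', RPC error -22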
00:17:04.682 02:19:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.682 02:19:04 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:04.682 02:19:04 -- target/tls.sh@31 -- # waitforlisten 88409 /var/tmp/bdevperf.sock 00:17:04.682 02:19:04 -- common/autotest_common.sh@819 -- # '[' -z 88409 ']' 00:17:04.682 02:19:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.682 02:19:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:04.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.682 02:19:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.682 02:19:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:04.682 02:19:04 -- common/autotest_common.sh@10 -- # set +x 00:17:04.682 [2024-07-15 02:19:04.219710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:04.682 [2024-07-15 02:19:04.219810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88409 ] 00:17:04.940 [2024-07-15 02:19:04.354527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.940 [2024-07-15 02:19:04.432694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.872 02:19:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:05.872 02:19:05 -- common/autotest_common.sh@852 -- # return 0 00:17:05.872 02:19:05 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.872 [2024-07-15 02:19:05.371295] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:05.872 [2024-07-15 02:19:05.371355] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:05.872 2024/07/15 02:19:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:05.872 request: 00:17:05.872 { 00:17:05.872 "method": "bdev_nvme_attach_controller", 00:17:05.872 "params": { 00:17:05.872 "name": "TLSTEST", 00:17:05.872 "trtype": "tcp", 00:17:05.872 "traddr": "10.0.0.2", 00:17:05.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.872 "adrfam": "ipv4", 00:17:05.872 "trsvcid": "4420", 00:17:05.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.872 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:05.872 } 00:17:05.872 } 00:17:05.872 Got JSON-RPC error response 00:17:05.872 GoRPCClient: error on JSON-RPC call 00:17:05.872 02:19:05 -- target/tls.sh@36 -- # killprocess 88409 00:17:05.872 02:19:05 -- common/autotest_common.sh@926 -- # '[' -z 88409 ']' 
00:17:05.872 02:19:05 -- common/autotest_common.sh@930 -- # kill -0 88409 00:17:05.872 02:19:05 -- common/autotest_common.sh@931 -- # uname 00:17:05.872 02:19:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:05.872 02:19:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88409 00:17:05.872 02:19:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:05.872 02:19:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:05.872 killing process with pid 88409 00:17:05.872 02:19:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88409' 00:17:05.872 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.873 00:17:05.873 Latency(us) 00:17:05.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.873 =================================================================================================================== 00:17:05.873 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:05.873 02:19:05 -- common/autotest_common.sh@945 -- # kill 88409 00:17:05.873 02:19:05 -- common/autotest_common.sh@950 -- # wait 88409 00:17:06.131 02:19:05 -- target/tls.sh@37 -- # return 1 00:17:06.131 02:19:05 -- common/autotest_common.sh@643 -- # es=1 00:17:06.131 02:19:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:06.131 02:19:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:06.131 02:19:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:06.131 02:19:05 -- target/tls.sh@183 -- # killprocess 88158 00:17:06.131 02:19:05 -- common/autotest_common.sh@926 -- # '[' -z 88158 ']' 00:17:06.131 02:19:05 -- common/autotest_common.sh@930 -- # kill -0 88158 00:17:06.131 02:19:05 -- common/autotest_common.sh@931 -- # uname 00:17:06.131 02:19:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:06.131 02:19:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88158 00:17:06.131 02:19:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:06.131 02:19:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:06.131 killing process with pid 88158 00:17:06.131 02:19:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88158' 00:17:06.131 02:19:05 -- common/autotest_common.sh@945 -- # kill 88158 00:17:06.131 02:19:05 -- common/autotest_common.sh@950 -- # wait 88158 00:17:06.389 02:19:05 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:06.389 02:19:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:06.389 02:19:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:06.389 02:19:05 -- common/autotest_common.sh@10 -- # set +x 00:17:06.389 02:19:05 -- nvmf/common.sh@469 -- # nvmfpid=88465 00:17:06.389 02:19:05 -- nvmf/common.sh@470 -- # waitforlisten 88465 00:17:06.389 02:19:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:06.389 02:19:05 -- common/autotest_common.sh@819 -- # '[' -z 88465 ']' 00:17:06.389 02:19:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.389 02:19:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:06.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.389 02:19:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:06.389 02:19:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:06.389 02:19:05 -- common/autotest_common.sh@10 -- # set +x 00:17:06.389 [2024-07-15 02:19:05.904042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:06.389 [2024-07-15 02:19:05.904129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.648 [2024-07-15 02:19:06.037111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.648 [2024-07-15 02:19:06.108890] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:06.648 [2024-07-15 02:19:06.109038] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.648 [2024-07-15 02:19:06.109078] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.648 [2024-07-15 02:19:06.109088] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.648 [2024-07-15 02:19:06.109114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.585 02:19:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:07.585 02:19:06 -- common/autotest_common.sh@852 -- # return 0 00:17:07.585 02:19:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:07.585 02:19:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:07.585 02:19:06 -- common/autotest_common.sh@10 -- # set +x 00:17:07.585 02:19:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.585 02:19:06 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:07.585 02:19:06 -- common/autotest_common.sh@640 -- # local es=0 00:17:07.585 02:19:06 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:07.585 02:19:06 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:07.585 02:19:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:07.585 02:19:06 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:07.585 02:19:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:07.585 02:19:06 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:07.585 02:19:06 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:07.585 02:19:06 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:07.585 [2024-07-15 02:19:07.107434] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.585 02:19:07 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:07.844 02:19:07 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:08.103 [2024-07-15 02:19:07.567543] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:08.103 [2024-07-15 02:19:07.567790] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
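Note that everything up to and including the listener succeeds even though key_long.txt is still 0666: the target only touches the file when nvmf_subsystem_add_host loads the PSK, so that is where the permission check fires below (tcp.c's tcp_load_psk, surfaced as JSON-RPC -32603 Internal error rather than the initiator's -22). Sketch of the failing call, paths abbreviated:

    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk key_long.txt    # fails while the file is 0666; succeeds after chmod 0600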
00:17:08.103 02:19:07 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:08.361 malloc0 00:17:08.361 02:19:07 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:08.621 02:19:08 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:08.880 [2024-07-15 02:19:08.283255] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:08.880 [2024-07-15 02:19:08.283300] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:08.880 [2024-07-15 02:19:08.283336] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:08.880 2024/07/15 02:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:08.880 request: 00:17:08.880 { 00:17:08.880 "method": "nvmf_subsystem_add_host", 00:17:08.880 "params": { 00:17:08.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.880 "host": "nqn.2016-06.io.spdk:host1", 00:17:08.880 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:08.880 } 00:17:08.880 } 00:17:08.880 Got JSON-RPC error response 00:17:08.880 GoRPCClient: error on JSON-RPC call 00:17:08.880 02:19:08 -- common/autotest_common.sh@643 -- # es=1 00:17:08.880 02:19:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:08.880 02:19:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:08.880 02:19:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:08.880 02:19:08 -- target/tls.sh@189 -- # killprocess 88465 00:17:08.880 02:19:08 -- common/autotest_common.sh@926 -- # '[' -z 88465 ']' 00:17:08.880 02:19:08 -- common/autotest_common.sh@930 -- # kill -0 88465 00:17:08.880 02:19:08 -- common/autotest_common.sh@931 -- # uname 00:17:08.880 02:19:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:08.880 02:19:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88465 00:17:08.880 02:19:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:08.880 02:19:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:08.880 killing process with pid 88465 00:17:08.880 02:19:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88465' 00:17:08.880 02:19:08 -- common/autotest_common.sh@945 -- # kill 88465 00:17:08.880 02:19:08 -- common/autotest_common.sh@950 -- # wait 88465 00:17:09.138 02:19:08 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:09.138 02:19:08 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:09.138 02:19:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:09.138 02:19:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:09.138 02:19:08 -- common/autotest_common.sh@10 -- # set +x 00:17:09.138 02:19:08 -- nvmf/common.sh@469 -- # nvmfpid=88570 00:17:09.139 02:19:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:09.139 02:19:08 -- nvmf/common.sh@470 -- # waitforlisten 88570 00:17:09.139 02:19:08 -- 
common/autotest_common.sh@819 -- # '[' -z 88570 ']' 00:17:09.139 02:19:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.139 02:19:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.139 02:19:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.139 02:19:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.139 02:19:08 -- common/autotest_common.sh@10 -- # set +x 00:17:09.139 [2024-07-15 02:19:08.588285] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:09.139 [2024-07-15 02:19:08.588387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.397 [2024-07-15 02:19:08.722014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.397 [2024-07-15 02:19:08.797504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:09.397 [2024-07-15 02:19:08.797684] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.397 [2024-07-15 02:19:08.797698] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.397 [2024-07-15 02:19:08.797707] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.397 [2024-07-15 02:19:08.797732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.334 02:19:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.334 02:19:09 -- common/autotest_common.sh@852 -- # return 0 00:17:10.334 02:19:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:10.334 02:19:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:10.334 02:19:09 -- common/autotest_common.sh@10 -- # set +x 00:17:10.334 02:19:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.334 02:19:09 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:10.334 02:19:09 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:10.334 02:19:09 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:10.334 [2024-07-15 02:19:09.802228] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.334 02:19:09 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:10.592 02:19:10 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:10.850 [2024-07-15 02:19:10.270377] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:10.850 [2024-07-15 02:19:10.270577] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.850 02:19:10 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:11.108 malloc0 00:17:11.108 02:19:10 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:11.365 02:19:10 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:11.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.639 02:19:10 -- target/tls.sh@197 -- # bdevperf_pid=88673 00:17:11.639 02:19:10 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.639 02:19:10 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.639 02:19:10 -- target/tls.sh@200 -- # waitforlisten 88673 /var/tmp/bdevperf.sock 00:17:11.639 02:19:10 -- common/autotest_common.sh@819 -- # '[' -z 88673 ']' 00:17:11.639 02:19:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.639 02:19:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:11.639 02:19:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.639 02:19:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:11.639 02:19:10 -- common/autotest_common.sh@10 -- # set +x 00:17:11.639 [2024-07-15 02:19:11.025975] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:11.639 [2024-07-15 02:19:11.026572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88673 ] 00:17:11.639 [2024-07-15 02:19:11.161930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.909 [2024-07-15 02:19:11.242267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.474 02:19:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:12.474 02:19:11 -- common/autotest_common.sh@852 -- # return 0 00:17:12.474 02:19:11 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:12.731 [2024-07-15 02:19:12.140085] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.731 TLSTESTn1 00:17:12.731 02:19:12 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:13.297 02:19:12 -- target/tls.sh@205 -- # tgtconf='{ 00:17:13.297 "subsystems": [ 00:17:13.297 { 00:17:13.297 "subsystem": "iobuf", 00:17:13.297 "config": [ 00:17:13.297 { 00:17:13.297 "method": "iobuf_set_options", 00:17:13.297 "params": { 00:17:13.297 "large_bufsize": 135168, 00:17:13.297 "large_pool_count": 1024, 00:17:13.297 "small_bufsize": 8192, 00:17:13.298 "small_pool_count": 8192 00:17:13.298 } 00:17:13.298 } 00:17:13.298 ] 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "subsystem": "sock", 00:17:13.298 "config": [ 00:17:13.298 { 00:17:13.298 "method": "sock_impl_set_options", 00:17:13.298 "params": { 00:17:13.298 "enable_ktls": false, 00:17:13.298 "enable_placement_id": 0, 00:17:13.298 "enable_quickack": false, 00:17:13.298 "enable_recv_pipe": true, 00:17:13.298 
"enable_zerocopy_send_client": false, 00:17:13.298 "enable_zerocopy_send_server": true, 00:17:13.298 "impl_name": "posix", 00:17:13.298 "recv_buf_size": 2097152, 00:17:13.298 "send_buf_size": 2097152, 00:17:13.298 "tls_version": 0, 00:17:13.298 "zerocopy_threshold": 0 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "sock_impl_set_options", 00:17:13.298 "params": { 00:17:13.298 "enable_ktls": false, 00:17:13.298 "enable_placement_id": 0, 00:17:13.298 "enable_quickack": false, 00:17:13.298 "enable_recv_pipe": true, 00:17:13.298 "enable_zerocopy_send_client": false, 00:17:13.298 "enable_zerocopy_send_server": true, 00:17:13.298 "impl_name": "ssl", 00:17:13.298 "recv_buf_size": 4096, 00:17:13.298 "send_buf_size": 4096, 00:17:13.298 "tls_version": 0, 00:17:13.298 "zerocopy_threshold": 0 00:17:13.298 } 00:17:13.298 } 00:17:13.298 ] 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "subsystem": "vmd", 00:17:13.298 "config": [] 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "subsystem": "accel", 00:17:13.298 "config": [ 00:17:13.298 { 00:17:13.298 "method": "accel_set_options", 00:17:13.298 "params": { 00:17:13.298 "buf_count": 2048, 00:17:13.298 "large_cache_size": 16, 00:17:13.298 "sequence_count": 2048, 00:17:13.298 "small_cache_size": 128, 00:17:13.298 "task_count": 2048 00:17:13.298 } 00:17:13.298 } 00:17:13.298 ] 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "subsystem": "bdev", 00:17:13.298 "config": [ 00:17:13.298 { 00:17:13.298 "method": "bdev_set_options", 00:17:13.298 "params": { 00:17:13.298 "bdev_auto_examine": true, 00:17:13.298 "bdev_io_cache_size": 256, 00:17:13.298 "bdev_io_pool_size": 65535, 00:17:13.298 "iobuf_large_cache_size": 16, 00:17:13.298 "iobuf_small_cache_size": 128 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "bdev_raid_set_options", 00:17:13.298 "params": { 00:17:13.298 "process_window_size_kb": 1024 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "bdev_iscsi_set_options", 00:17:13.298 "params": { 00:17:13.298 "timeout_sec": 30 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "bdev_nvme_set_options", 00:17:13.298 "params": { 00:17:13.298 "action_on_timeout": "none", 00:17:13.298 "allow_accel_sequence": false, 00:17:13.298 "arbitration_burst": 0, 00:17:13.298 "bdev_retry_count": 3, 00:17:13.298 "ctrlr_loss_timeout_sec": 0, 00:17:13.298 "delay_cmd_submit": true, 00:17:13.298 "fast_io_fail_timeout_sec": 0, 00:17:13.298 "generate_uuids": false, 00:17:13.298 "high_priority_weight": 0, 00:17:13.298 "io_path_stat": false, 00:17:13.298 "io_queue_requests": 0, 00:17:13.298 "keep_alive_timeout_ms": 10000, 00:17:13.298 "low_priority_weight": 0, 00:17:13.298 "medium_priority_weight": 0, 00:17:13.298 "nvme_adminq_poll_period_us": 10000, 00:17:13.298 "nvme_ioq_poll_period_us": 0, 00:17:13.298 "reconnect_delay_sec": 0, 00:17:13.298 "timeout_admin_us": 0, 00:17:13.298 "timeout_us": 0, 00:17:13.298 "transport_ack_timeout": 0, 00:17:13.298 "transport_retry_count": 4, 00:17:13.298 "transport_tos": 0 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "bdev_nvme_set_hotplug", 00:17:13.298 "params": { 00:17:13.298 "enable": false, 00:17:13.298 "period_us": 100000 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "bdev_malloc_create", 00:17:13.298 "params": { 00:17:13.298 "block_size": 4096, 00:17:13.298 "name": "malloc0", 00:17:13.298 "num_blocks": 8192, 00:17:13.298 "optimal_io_boundary": 0, 00:17:13.298 "physical_block_size": 4096, 00:17:13.298 "uuid": 
"f2c45a72-6f16-452c-9d64-72d923e7d3e5" 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "bdev_wait_for_examine" 00:17:13.298 } 00:17:13.298 ] 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "subsystem": "nbd", 00:17:13.298 "config": [] 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "subsystem": "scheduler", 00:17:13.298 "config": [ 00:17:13.298 { 00:17:13.298 "method": "framework_set_scheduler", 00:17:13.298 "params": { 00:17:13.298 "name": "static" 00:17:13.298 } 00:17:13.298 } 00:17:13.298 ] 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "subsystem": "nvmf", 00:17:13.298 "config": [ 00:17:13.298 { 00:17:13.298 "method": "nvmf_set_config", 00:17:13.298 "params": { 00:17:13.298 "admin_cmd_passthru": { 00:17:13.298 "identify_ctrlr": false 00:17:13.298 }, 00:17:13.298 "discovery_filter": "match_any" 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "nvmf_set_max_subsystems", 00:17:13.298 "params": { 00:17:13.298 "max_subsystems": 1024 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "nvmf_set_crdt", 00:17:13.298 "params": { 00:17:13.298 "crdt1": 0, 00:17:13.298 "crdt2": 0, 00:17:13.298 "crdt3": 0 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "nvmf_create_transport", 00:17:13.298 "params": { 00:17:13.298 "abort_timeout_sec": 1, 00:17:13.298 "buf_cache_size": 4294967295, 00:17:13.298 "c2h_success": false, 00:17:13.298 "dif_insert_or_strip": false, 00:17:13.298 "in_capsule_data_size": 4096, 00:17:13.298 "io_unit_size": 131072, 00:17:13.298 "max_aq_depth": 128, 00:17:13.298 "max_io_qpairs_per_ctrlr": 127, 00:17:13.298 "max_io_size": 131072, 00:17:13.298 "max_queue_depth": 128, 00:17:13.298 "num_shared_buffers": 511, 00:17:13.298 "sock_priority": 0, 00:17:13.298 "trtype": "TCP", 00:17:13.298 "zcopy": false 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "nvmf_create_subsystem", 00:17:13.298 "params": { 00:17:13.298 "allow_any_host": false, 00:17:13.298 "ana_reporting": false, 00:17:13.298 "max_cntlid": 65519, 00:17:13.298 "max_namespaces": 10, 00:17:13.298 "min_cntlid": 1, 00:17:13.298 "model_number": "SPDK bdev Controller", 00:17:13.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.298 "serial_number": "SPDK00000000000001" 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "nvmf_subsystem_add_host", 00:17:13.298 "params": { 00:17:13.298 "host": "nqn.2016-06.io.spdk:host1", 00:17:13.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.298 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "nvmf_subsystem_add_ns", 00:17:13.298 "params": { 00:17:13.298 "namespace": { 00:17:13.298 "bdev_name": "malloc0", 00:17:13.298 "nguid": "F2C45A726F16452C9D6472D923E7D3E5", 00:17:13.298 "nsid": 1, 00:17:13.298 "uuid": "f2c45a72-6f16-452c-9d64-72d923e7d3e5" 00:17:13.298 }, 00:17:13.298 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:13.298 } 00:17:13.298 }, 00:17:13.298 { 00:17:13.298 "method": "nvmf_subsystem_add_listener", 00:17:13.298 "params": { 00:17:13.298 "listen_address": { 00:17:13.298 "adrfam": "IPv4", 00:17:13.298 "traddr": "10.0.0.2", 00:17:13.298 "trsvcid": "4420", 00:17:13.298 "trtype": "TCP" 00:17:13.298 }, 00:17:13.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.298 "secure_channel": true 00:17:13.298 } 00:17:13.298 } 00:17:13.298 ] 00:17:13.298 } 00:17:13.298 ] 00:17:13.298 }' 00:17:13.298 02:19:12 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:17:13.558 02:19:12 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:13.558 "subsystems": [ 00:17:13.558 { 00:17:13.558 "subsystem": "iobuf", 00:17:13.558 "config": [ 00:17:13.558 { 00:17:13.558 "method": "iobuf_set_options", 00:17:13.558 "params": { 00:17:13.558 "large_bufsize": 135168, 00:17:13.558 "large_pool_count": 1024, 00:17:13.558 "small_bufsize": 8192, 00:17:13.558 "small_pool_count": 8192 00:17:13.558 } 00:17:13.558 } 00:17:13.558 ] 00:17:13.558 }, 00:17:13.558 { 00:17:13.558 "subsystem": "sock", 00:17:13.558 "config": [ 00:17:13.558 { 00:17:13.558 "method": "sock_impl_set_options", 00:17:13.558 "params": { 00:17:13.558 "enable_ktls": false, 00:17:13.558 "enable_placement_id": 0, 00:17:13.558 "enable_quickack": false, 00:17:13.558 "enable_recv_pipe": true, 00:17:13.558 "enable_zerocopy_send_client": false, 00:17:13.558 "enable_zerocopy_send_server": true, 00:17:13.558 "impl_name": "posix", 00:17:13.558 "recv_buf_size": 2097152, 00:17:13.558 "send_buf_size": 2097152, 00:17:13.558 "tls_version": 0, 00:17:13.558 "zerocopy_threshold": 0 00:17:13.558 } 00:17:13.558 }, 00:17:13.558 { 00:17:13.558 "method": "sock_impl_set_options", 00:17:13.558 "params": { 00:17:13.558 "enable_ktls": false, 00:17:13.558 "enable_placement_id": 0, 00:17:13.558 "enable_quickack": false, 00:17:13.558 "enable_recv_pipe": true, 00:17:13.558 "enable_zerocopy_send_client": false, 00:17:13.558 "enable_zerocopy_send_server": true, 00:17:13.558 "impl_name": "ssl", 00:17:13.558 "recv_buf_size": 4096, 00:17:13.558 "send_buf_size": 4096, 00:17:13.558 "tls_version": 0, 00:17:13.558 "zerocopy_threshold": 0 00:17:13.558 } 00:17:13.558 } 00:17:13.558 ] 00:17:13.558 }, 00:17:13.558 { 00:17:13.558 "subsystem": "vmd", 00:17:13.558 "config": [] 00:17:13.558 }, 00:17:13.558 { 00:17:13.558 "subsystem": "accel", 00:17:13.558 "config": [ 00:17:13.558 { 00:17:13.558 "method": "accel_set_options", 00:17:13.558 "params": { 00:17:13.558 "buf_count": 2048, 00:17:13.558 "large_cache_size": 16, 00:17:13.558 "sequence_count": 2048, 00:17:13.558 "small_cache_size": 128, 00:17:13.558 "task_count": 2048 00:17:13.558 } 00:17:13.558 } 00:17:13.558 ] 00:17:13.558 }, 00:17:13.558 { 00:17:13.558 "subsystem": "bdev", 00:17:13.558 "config": [ 00:17:13.558 { 00:17:13.558 "method": "bdev_set_options", 00:17:13.558 "params": { 00:17:13.558 "bdev_auto_examine": true, 00:17:13.558 "bdev_io_cache_size": 256, 00:17:13.558 "bdev_io_pool_size": 65535, 00:17:13.558 "iobuf_large_cache_size": 16, 00:17:13.558 "iobuf_small_cache_size": 128 00:17:13.558 } 00:17:13.558 }, 00:17:13.558 { 00:17:13.558 "method": "bdev_raid_set_options", 00:17:13.558 "params": { 00:17:13.558 "process_window_size_kb": 1024 00:17:13.558 } 00:17:13.558 }, 00:17:13.558 { 00:17:13.558 "method": "bdev_iscsi_set_options", 00:17:13.558 "params": { 00:17:13.558 "timeout_sec": 30 00:17:13.558 } 00:17:13.558 }, 00:17:13.558 { 00:17:13.558 "method": "bdev_nvme_set_options", 00:17:13.558 "params": { 00:17:13.558 "action_on_timeout": "none", 00:17:13.558 "allow_accel_sequence": false, 00:17:13.558 "arbitration_burst": 0, 00:17:13.558 "bdev_retry_count": 3, 00:17:13.558 "ctrlr_loss_timeout_sec": 0, 00:17:13.558 "delay_cmd_submit": true, 00:17:13.558 "fast_io_fail_timeout_sec": 0, 00:17:13.558 "generate_uuids": false, 00:17:13.558 "high_priority_weight": 0, 00:17:13.558 "io_path_stat": false, 00:17:13.558 "io_queue_requests": 512, 00:17:13.558 "keep_alive_timeout_ms": 10000, 00:17:13.558 "low_priority_weight": 0, 00:17:13.558 "medium_priority_weight": 0, 00:17:13.558 "nvme_adminq_poll_period_us": 
10000, 00:17:13.558 "nvme_ioq_poll_period_us": 0, 00:17:13.558 "reconnect_delay_sec": 0, 00:17:13.558 "timeout_admin_us": 0, 00:17:13.558 "timeout_us": 0, 00:17:13.558 "transport_ack_timeout": 0, 00:17:13.558 "transport_retry_count": 4, 00:17:13.558 "transport_tos": 0 00:17:13.558 } 00:17:13.558 }, 00:17:13.558 { 00:17:13.559 "method": "bdev_nvme_attach_controller", 00:17:13.559 "params": { 00:17:13.559 "adrfam": "IPv4", 00:17:13.559 "ctrlr_loss_timeout_sec": 0, 00:17:13.559 "ddgst": false, 00:17:13.559 "fast_io_fail_timeout_sec": 0, 00:17:13.559 "hdgst": false, 00:17:13.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.559 "name": "TLSTEST", 00:17:13.559 "prchk_guard": false, 00:17:13.559 "prchk_reftag": false, 00:17:13.559 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:13.559 "reconnect_delay_sec": 0, 00:17:13.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.559 "traddr": "10.0.0.2", 00:17:13.559 "trsvcid": "4420", 00:17:13.559 "trtype": "TCP" 00:17:13.559 } 00:17:13.559 }, 00:17:13.559 { 00:17:13.559 "method": "bdev_nvme_set_hotplug", 00:17:13.559 "params": { 00:17:13.559 "enable": false, 00:17:13.559 "period_us": 100000 00:17:13.559 } 00:17:13.559 }, 00:17:13.559 { 00:17:13.559 "method": "bdev_wait_for_examine" 00:17:13.559 } 00:17:13.559 ] 00:17:13.559 }, 00:17:13.559 { 00:17:13.559 "subsystem": "nbd", 00:17:13.559 "config": [] 00:17:13.559 } 00:17:13.559 ] 00:17:13.559 }' 00:17:13.559 02:19:12 -- target/tls.sh@208 -- # killprocess 88673 00:17:13.559 02:19:12 -- common/autotest_common.sh@926 -- # '[' -z 88673 ']' 00:17:13.559 02:19:12 -- common/autotest_common.sh@930 -- # kill -0 88673 00:17:13.559 02:19:12 -- common/autotest_common.sh@931 -- # uname 00:17:13.559 02:19:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:13.559 02:19:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88673 00:17:13.559 killing process with pid 88673 00:17:13.559 Received shutdown signal, test time was about 10.000000 seconds 00:17:13.559 00:17:13.559 Latency(us) 00:17:13.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.559 =================================================================================================================== 00:17:13.559 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:13.559 02:19:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:13.559 02:19:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:13.559 02:19:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88673' 00:17:13.559 02:19:12 -- common/autotest_common.sh@945 -- # kill 88673 00:17:13.559 02:19:12 -- common/autotest_common.sh@950 -- # wait 88673 00:17:13.828 02:19:13 -- target/tls.sh@209 -- # killprocess 88570 00:17:13.828 02:19:13 -- common/autotest_common.sh@926 -- # '[' -z 88570 ']' 00:17:13.828 02:19:13 -- common/autotest_common.sh@930 -- # kill -0 88570 00:17:13.828 02:19:13 -- common/autotest_common.sh@931 -- # uname 00:17:13.828 02:19:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:13.828 02:19:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88570 00:17:13.828 killing process with pid 88570 00:17:13.828 02:19:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:13.828 02:19:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:13.828 02:19:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88570' 00:17:13.828 02:19:13 -- 
common/autotest_common.sh@945 -- # kill 88570 00:17:13.828 02:19:13 -- common/autotest_common.sh@950 -- # wait 88570 00:17:13.828 02:19:13 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:13.828 02:19:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:13.828 02:19:13 -- target/tls.sh@212 -- # echo '{ 00:17:13.828 "subsystems": [ 00:17:13.828 { 00:17:13.828 "subsystem": "iobuf", 00:17:13.828 "config": [ 00:17:13.828 { 00:17:13.828 "method": "iobuf_set_options", 00:17:13.828 "params": { 00:17:13.828 "large_bufsize": 135168, 00:17:13.828 "large_pool_count": 1024, 00:17:13.828 "small_bufsize": 8192, 00:17:13.828 "small_pool_count": 8192 00:17:13.828 } 00:17:13.828 } 00:17:13.828 ] 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "subsystem": "sock", 00:17:13.828 "config": [ 00:17:13.828 { 00:17:13.828 "method": "sock_impl_set_options", 00:17:13.828 "params": { 00:17:13.828 "enable_ktls": false, 00:17:13.828 "enable_placement_id": 0, 00:17:13.828 "enable_quickack": false, 00:17:13.828 "enable_recv_pipe": true, 00:17:13.828 "enable_zerocopy_send_client": false, 00:17:13.828 "enable_zerocopy_send_server": true, 00:17:13.828 "impl_name": "posix", 00:17:13.828 "recv_buf_size": 2097152, 00:17:13.828 "send_buf_size": 2097152, 00:17:13.828 "tls_version": 0, 00:17:13.828 "zerocopy_threshold": 0 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "sock_impl_set_options", 00:17:13.828 "params": { 00:17:13.828 "enable_ktls": false, 00:17:13.828 "enable_placement_id": 0, 00:17:13.828 "enable_quickack": false, 00:17:13.828 "enable_recv_pipe": true, 00:17:13.828 "enable_zerocopy_send_client": false, 00:17:13.828 "enable_zerocopy_send_server": true, 00:17:13.828 "impl_name": "ssl", 00:17:13.828 "recv_buf_size": 4096, 00:17:13.828 "send_buf_size": 4096, 00:17:13.828 "tls_version": 0, 00:17:13.828 "zerocopy_threshold": 0 00:17:13.828 } 00:17:13.828 } 00:17:13.828 ] 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "subsystem": "vmd", 00:17:13.828 "config": [] 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "subsystem": "accel", 00:17:13.828 "config": [ 00:17:13.828 { 00:17:13.828 "method": "accel_set_options", 00:17:13.828 "params": { 00:17:13.828 "buf_count": 2048, 00:17:13.828 "large_cache_size": 16, 00:17:13.828 "sequence_count": 2048, 00:17:13.828 "small_cache_size": 128, 00:17:13.828 "task_count": 2048 00:17:13.828 } 00:17:13.828 } 00:17:13.828 ] 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "subsystem": "bdev", 00:17:13.828 "config": [ 00:17:13.828 { 00:17:13.828 "method": "bdev_set_options", 00:17:13.828 "params": { 00:17:13.828 "bdev_auto_examine": true, 00:17:13.828 "bdev_io_cache_size": 256, 00:17:13.828 "bdev_io_pool_size": 65535, 00:17:13.828 "iobuf_large_cache_size": 16, 00:17:13.828 "iobuf_small_cache_size": 128 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "bdev_raid_set_options", 00:17:13.828 "params": { 00:17:13.828 "process_window_size_kb": 1024 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "bdev_iscsi_set_options", 00:17:13.828 "params": { 00:17:13.828 "timeout_sec": 30 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "bdev_nvme_set_options", 00:17:13.828 "params": { 00:17:13.828 "action_on_timeout": "none", 00:17:13.828 "allow_accel_sequence": false, 00:17:13.828 "arbitration_burst": 0, 00:17:13.828 "bdev_retry_count": 3, 00:17:13.828 "ctrlr_loss_timeout_sec": 0, 00:17:13.828 "delay_cmd_submit": true, 00:17:13.828 "fast_io_fail_timeout_sec": 0, 00:17:13.828 "generate_uuids": false, 00:17:13.828 
"high_priority_weight": 0, 00:17:13.828 "io_path_stat": false, 00:17:13.828 "io_queue_requests": 0, 00:17:13.828 "keep_alive_timeout_ms": 10000, 00:17:13.828 "low_priority_weight": 0, 00:17:13.828 "medium_priority_weight": 0, 00:17:13.828 "nvme_adminq_poll_period_us": 10000, 00:17:13.828 "nvme_ioq_poll_period_us": 0, 00:17:13.828 "reconnect_delay_sec": 0, 00:17:13.828 "timeout_admin_us": 0, 00:17:13.828 "timeout_us": 0, 00:17:13.828 "transport_ack_timeout": 0, 00:17:13.828 "transport_retry_count": 4, 00:17:13.828 "transport_tos": 0 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "bdev_nvme_set_hotplug", 00:17:13.828 "params": { 00:17:13.828 "enable": false, 00:17:13.828 "period_us": 100000 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "bdev_malloc_create", 00:17:13.828 "params": { 00:17:13.828 "block_size": 4096, 00:17:13.828 "name": "malloc0", 00:17:13.828 "num_blocks": 8192, 00:17:13.828 "optimal_io_boundary": 0, 00:17:13.828 "physical_block_size": 4096, 00:17:13.828 "uuid": "f2c45a72-6f16-452c-9d64-72d923e7d3e5" 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "bdev_wait_for_examine" 00:17:13.828 } 00:17:13.828 ] 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "subsystem": "nbd", 00:17:13.828 "config": [] 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "subsystem": "scheduler", 00:17:13.828 "config": [ 00:17:13.828 { 00:17:13.828 "method": "framework_set_scheduler", 00:17:13.828 "params": { 00:17:13.828 "name": "static" 00:17:13.828 } 00:17:13.828 } 00:17:13.828 ] 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "subsystem": "nvmf", 00:17:13.828 "config": [ 00:17:13.828 { 00:17:13.828 "method": "nvmf_set_config", 00:17:13.828 "params": { 00:17:13.828 "admin_cmd_passthru": { 00:17:13.828 "identify_ctrlr": false 00:17:13.828 }, 00:17:13.828 "discovery_filter": "match_any" 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "nvmf_set_max_subsystems", 00:17:13.828 "params": { 00:17:13.828 "max_subsystems": 1024 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "nvmf_set_crdt", 00:17:13.828 "params": { 00:17:13.828 "crdt1": 0, 00:17:13.828 "crdt2": 0, 00:17:13.828 "crdt3": 0 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "nvmf_create_transport", 00:17:13.828 "params": { 00:17:13.828 "abort_timeout_sec": 1, 00:17:13.828 "buf_cache_size": 4294967295, 00:17:13.828 "c2h_success": false, 00:17:13.828 "dif_insert_or_strip": false, 00:17:13.828 "in_capsule_data_size": 4096, 00:17:13.828 "io_unit_size": 131072, 00:17:13.828 "max_aq_depth": 128, 00:17:13.828 "max_io_qpairs_per_ctrlr": 127, 00:17:13.828 "max_io_size": 131072, 00:17:13.828 "max_queue_depth": 128, 00:17:13.828 "num_shared_buffers": 511, 00:17:13.828 "sock_priority": 0, 00:17:13.828 "trtype": "TCP", 00:17:13.828 "zcopy": false 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "nvmf_create_subsystem", 00:17:13.828 "params": { 00:17:13.828 "allow_any_host": false, 00:17:13.828 "ana_reporting": false, 00:17:13.828 "max_cntlid": 65519, 00:17:13.828 "max_namespaces": 10, 00:17:13.828 "min_cntlid": 1, 00:17:13.828 "model_number": "SPDK bdev Controller", 00:17:13.828 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.828 "serial_number": "SPDK00000000000001" 00:17:13.828 } 00:17:13.828 }, 00:17:13.828 { 00:17:13.828 "method": "nvmf_subsystem_add_host", 00:17:13.828 "params": { 00:17:13.828 "host": "nqn.2016-06.io.spdk:host1", 00:17:13.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.829 "psk": 
"/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:13.829 } 00:17:13.829 }, 00:17:13.829 { 00:17:13.829 "method": "nvmf_subsystem_add_ns", 00:17:13.829 "params": { 00:17:13.829 "namespace": { 00:17:13.829 "bdev_name": "malloc0", 00:17:13.829 "nguid": "F2C45A726F16452C9D6472D923E7D3E5", 00:17:13.829 "nsid": 1, 00:17:13.829 "uuid": "f2c45a72-6f16-452c-9d64-72d923e7d3e5" 00:17:13.829 }, 00:17:13.829 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:13.829 } 00:17:13.829 }, 00:17:13.829 { 00:17:13.829 "method": "nvmf_subsystem_add_listener", 00:17:13.829 "params": { 00:17:13.829 "listen_address": { 00:17:13.829 "adrfam": "IPv4", 00:17:13.829 "traddr": "10.0.0.2", 00:17:13.829 "trsvcid": "4420", 00:17:13.829 "trtype": "TCP" 00:17:13.829 }, 00:17:13.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.829 "secure_channel": true 00:17:13.829 } 00:17:13.829 } 00:17:13.829 ] 00:17:13.829 } 00:17:13.829 ] 00:17:13.829 }' 00:17:13.829 02:19:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:13.829 02:19:13 -- common/autotest_common.sh@10 -- # set +x 00:17:13.829 02:19:13 -- nvmf/common.sh@469 -- # nvmfpid=88746 00:17:13.829 02:19:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:13.829 02:19:13 -- nvmf/common.sh@470 -- # waitforlisten 88746 00:17:13.829 02:19:13 -- common/autotest_common.sh@819 -- # '[' -z 88746 ']' 00:17:13.829 02:19:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.829 02:19:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.829 02:19:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.829 02:19:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.829 02:19:13 -- common/autotest_common.sh@10 -- # set +x 00:17:14.088 [2024-07-15 02:19:13.386590] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:14.088 [2024-07-15 02:19:13.386727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.088 [2024-07-15 02:19:13.516742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.088 [2024-07-15 02:19:13.597378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:14.088 [2024-07-15 02:19:13.597554] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.088 [2024-07-15 02:19:13.597567] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.088 [2024-07-15 02:19:13.597575] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:14.088 [2024-07-15 02:19:13.597600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.347 [2024-07-15 02:19:13.809552] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.347 [2024-07-15 02:19:13.841503] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:14.347 [2024-07-15 02:19:13.841732] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.912 02:19:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:14.912 02:19:14 -- common/autotest_common.sh@852 -- # return 0 00:17:14.912 02:19:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:14.912 02:19:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:14.912 02:19:14 -- common/autotest_common.sh@10 -- # set +x 00:17:14.912 02:19:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.912 02:19:14 -- target/tls.sh@216 -- # bdevperf_pid=88789 00:17:14.912 02:19:14 -- target/tls.sh@217 -- # waitforlisten 88789 /var/tmp/bdevperf.sock 00:17:14.912 02:19:14 -- common/autotest_common.sh@819 -- # '[' -z 88789 ']' 00:17:14.912 02:19:14 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:14.912 02:19:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.912 02:19:14 -- target/tls.sh@213 -- # echo '{ 00:17:14.912 "subsystems": [ 00:17:14.912 { 00:17:14.912 "subsystem": "iobuf", 00:17:14.912 "config": [ 00:17:14.912 { 00:17:14.912 "method": "iobuf_set_options", 00:17:14.912 "params": { 00:17:14.912 "large_bufsize": 135168, 00:17:14.912 "large_pool_count": 1024, 00:17:14.912 "small_bufsize": 8192, 00:17:14.913 "small_pool_count": 8192 00:17:14.913 } 00:17:14.913 } 00:17:14.913 ] 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "subsystem": "sock", 00:17:14.913 "config": [ 00:17:14.913 { 00:17:14.913 "method": "sock_impl_set_options", 00:17:14.913 "params": { 00:17:14.913 "enable_ktls": false, 00:17:14.913 "enable_placement_id": 0, 00:17:14.913 "enable_quickack": false, 00:17:14.913 "enable_recv_pipe": true, 00:17:14.913 "enable_zerocopy_send_client": false, 00:17:14.913 "enable_zerocopy_send_server": true, 00:17:14.913 "impl_name": "posix", 00:17:14.913 "recv_buf_size": 2097152, 00:17:14.913 "send_buf_size": 2097152, 00:17:14.913 "tls_version": 0, 00:17:14.913 "zerocopy_threshold": 0 00:17:14.913 } 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "method": "sock_impl_set_options", 00:17:14.913 "params": { 00:17:14.913 "enable_ktls": false, 00:17:14.913 "enable_placement_id": 0, 00:17:14.913 "enable_quickack": false, 00:17:14.913 "enable_recv_pipe": true, 00:17:14.913 "enable_zerocopy_send_client": false, 00:17:14.913 "enable_zerocopy_send_server": true, 00:17:14.913 "impl_name": "ssl", 00:17:14.913 "recv_buf_size": 4096, 00:17:14.913 "send_buf_size": 4096, 00:17:14.913 "tls_version": 0, 00:17:14.913 "zerocopy_threshold": 0 00:17:14.913 } 00:17:14.913 } 00:17:14.913 ] 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "subsystem": "vmd", 00:17:14.913 "config": [] 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "subsystem": "accel", 00:17:14.913 "config": [ 00:17:14.913 { 00:17:14.913 "method": "accel_set_options", 00:17:14.913 "params": { 00:17:14.913 "buf_count": 2048, 00:17:14.913 "large_cache_size": 16, 
00:17:14.913 "sequence_count": 2048, 00:17:14.913 "small_cache_size": 128, 00:17:14.913 "task_count": 2048 00:17:14.913 } 00:17:14.913 } 00:17:14.913 ] 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "subsystem": "bdev", 00:17:14.913 "config": [ 00:17:14.913 { 00:17:14.913 "method": "bdev_set_options", 00:17:14.913 "params": { 00:17:14.913 "bdev_auto_examine": true, 00:17:14.913 "bdev_io_cache_size": 256, 00:17:14.913 "bdev_io_pool_size": 65535, 00:17:14.913 "iobuf_large_cache_size": 16, 00:17:14.913 "iobuf_small_cache_size": 128 00:17:14.913 } 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "method": "bdev_raid_set_options", 00:17:14.913 "params": { 00:17:14.913 "process_window_size_kb": 1024 00:17:14.913 } 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "method": "bdev_iscsi_set_options", 00:17:14.913 "params": { 00:17:14.913 "timeout_sec": 30 00:17:14.913 } 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "method": "bdev_nvme_set_options", 00:17:14.913 "params": { 00:17:14.913 "action_on_timeout": "none", 00:17:14.913 "allow_accel_sequence": false, 00:17:14.913 "arbitration_burst": 0, 00:17:14.913 "bdev_retry_count": 3, 00:17:14.913 "ctrlr_loss_timeout_sec": 0, 00:17:14.913 "delay_cmd_submit": true, 00:17:14.913 "fast_io_fail_timeout_sec": 0, 00:17:14.913 "generate_uuids": false, 00:17:14.913 "high_priority_weight": 0, 00:17:14.913 "io_path_stat": false, 00:17:14.913 "io_queue_requests": 512, 00:17:14.913 "keep_alive_timeout_ms": 10000, 00:17:14.913 "low_priority_weight": 0, 00:17:14.913 "medium_priority_weight": 0, 00:17:14.913 "nvme_adminq_poll_period_us": 10000, 00:17:14.913 "nvme_ioq_poll_period_us": 0, 00:17:14.913 "reconnect_delay_sec": 0, 00:17:14.913 "timeout_admin_us": 0, 00:17:14.913 "timeout_us": 0, 00:17:14.913 "transport_ack_timeout": 0, 00:17:14.913 "transport_retry_count": 4, 00:17:14.913 "transport_tos": 0 00:17:14.913 } 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "method": "bdev_nvme_attach_controller", 00:17:14.913 "params": { 00:17:14.913 "adrfam": "IPv4", 00:17:14.913 "ctrlr_loss_timeout_sec": 0, 00:17:14.913 "ddgst": false, 00:17:14.913 "fast_io_fail_timeout_sec": 0, 00:17:14.913 "hdgst": false, 00:17:14.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.913 "name": "TLSTEST", 00:17:14.913 "prchk_guard": false, 00:17:14.913 "prchk_reftag": false, 00:17:14.913 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:14.913 "reconnect_delay_sec": 0, 00:17:14.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.913 "traddr": "10.0.0.2", 00:17:14.913 "trsvcid": "4420", 00:17:14.913 "trtype": "TCP" 00:17:14.913 } 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "method": "bdev_nvme_set_hotplug", 00:17:14.913 "params": { 00:17:14.913 "enable": false, 00:17:14.913 "period_us": 100000 00:17:14.913 } 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "method": "bdev_wait_for_examine" 00:17:14.913 } 00:17:14.913 ] 00:17:14.913 }, 00:17:14.913 { 00:17:14.913 "subsystem": "nbd", 00:17:14.913 "config": [] 00:17:14.913 } 00:17:14.913 ] 00:17:14.913 }' 00:17:14.913 02:19:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:14.913 02:19:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:14.913 02:19:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:14.913 02:19:14 -- common/autotest_common.sh@10 -- # set +x 00:17:14.913 [2024-07-15 02:19:14.382895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
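The initiator is restarted the same way (tls.sh@213, -c /dev/fd/63). Because the replayed JSON already contains the bdev_nvme_attach_controller call with the PSK, bdevperf re-establishes the TLS connection as soon as it parses its config, and the test then only has to trigger the workload through bdevperf.py, as seen just below. A sketch of that driver step, using the queue depth, I/O size, and workload from the command line above:

  # Start bdevperf idle (-z) with the replayed config, then run the 10 s verify
  # workload remotely; -t 20 gives the RPC driver a timeout comfortably above it.
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests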
00:17:14.913 [2024-07-15 02:19:14.383564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88789 ] 00:17:15.172 [2024-07-15 02:19:14.518165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.172 [2024-07-15 02:19:14.589846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.430 [2024-07-15 02:19:14.747303] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:15.996 02:19:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:15.996 02:19:15 -- common/autotest_common.sh@852 -- # return 0 00:17:15.996 02:19:15 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:15.996 Running I/O for 10 seconds... 00:17:25.995 00:17:25.995 Latency(us) 00:17:25.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.995 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:25.995 Verification LBA range: start 0x0 length 0x2000 00:17:25.995 TLSTESTn1 : 10.02 5738.96 22.42 0.00 0.00 22264.77 4557.73 18588.39 00:17:25.995 =================================================================================================================== 00:17:25.995 Total : 5738.96 22.42 0.00 0.00 22264.77 4557.73 18588.39 00:17:25.995 0 00:17:25.995 02:19:25 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:25.995 02:19:25 -- target/tls.sh@223 -- # killprocess 88789 00:17:25.995 02:19:25 -- common/autotest_common.sh@926 -- # '[' -z 88789 ']' 00:17:25.995 02:19:25 -- common/autotest_common.sh@930 -- # kill -0 88789 00:17:25.995 02:19:25 -- common/autotest_common.sh@931 -- # uname 00:17:25.995 02:19:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:25.995 02:19:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88789 00:17:25.995 02:19:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:25.995 02:19:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:25.995 02:19:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88789' 00:17:25.995 killing process with pid 88789 00:17:25.995 Received shutdown signal, test time was about 10.000000 seconds 00:17:25.995 00:17:25.995 Latency(us) 00:17:25.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.995 =================================================================================================================== 00:17:25.995 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.995 02:19:25 -- common/autotest_common.sh@945 -- # kill 88789 00:17:25.995 02:19:25 -- common/autotest_common.sh@950 -- # wait 88789 00:17:26.253 02:19:25 -- target/tls.sh@224 -- # killprocess 88746 00:17:26.253 02:19:25 -- common/autotest_common.sh@926 -- # '[' -z 88746 ']' 00:17:26.253 02:19:25 -- common/autotest_common.sh@930 -- # kill -0 88746 00:17:26.253 02:19:25 -- common/autotest_common.sh@931 -- # uname 00:17:26.253 02:19:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.253 02:19:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88746 00:17:26.253 killing process with pid 88746 00:17:26.253 02:19:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:26.253 02:19:25 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:26.253 02:19:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88746' 00:17:26.253 02:19:25 -- common/autotest_common.sh@945 -- # kill 88746 00:17:26.253 02:19:25 -- common/autotest_common.sh@950 -- # wait 88746 00:17:26.511 02:19:25 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:26.512 02:19:25 -- target/tls.sh@227 -- # cleanup 00:17:26.512 02:19:25 -- target/tls.sh@15 -- # process_shm --id 0 00:17:26.512 02:19:25 -- common/autotest_common.sh@796 -- # type=--id 00:17:26.512 02:19:25 -- common/autotest_common.sh@797 -- # id=0 00:17:26.512 02:19:25 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:26.512 02:19:25 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:26.512 02:19:25 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:26.512 02:19:25 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:26.512 02:19:25 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:26.512 02:19:25 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:26.512 nvmf_trace.0 00:17:26.512 02:19:25 -- common/autotest_common.sh@811 -- # return 0 00:17:26.512 02:19:25 -- target/tls.sh@16 -- # killprocess 88789 00:17:26.512 02:19:25 -- common/autotest_common.sh@926 -- # '[' -z 88789 ']' 00:17:26.512 02:19:25 -- common/autotest_common.sh@930 -- # kill -0 88789 00:17:26.512 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (88789) - No such process 00:17:26.512 Process with pid 88789 is not found 00:17:26.512 02:19:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 88789 is not found' 00:17:26.512 02:19:25 -- target/tls.sh@17 -- # nvmftestfini 00:17:26.512 02:19:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:26.512 02:19:25 -- nvmf/common.sh@116 -- # sync 00:17:26.512 02:19:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:26.512 02:19:25 -- nvmf/common.sh@119 -- # set +e 00:17:26.512 02:19:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:26.512 02:19:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:26.512 rmmod nvme_tcp 00:17:26.512 rmmod nvme_fabrics 00:17:26.512 rmmod nvme_keyring 00:17:26.512 02:19:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:26.512 02:19:26 -- nvmf/common.sh@123 -- # set -e 00:17:26.512 02:19:26 -- nvmf/common.sh@124 -- # return 0 00:17:26.512 02:19:26 -- nvmf/common.sh@477 -- # '[' -n 88746 ']' 00:17:26.512 02:19:26 -- nvmf/common.sh@478 -- # killprocess 88746 00:17:26.512 02:19:26 -- common/autotest_common.sh@926 -- # '[' -z 88746 ']' 00:17:26.512 02:19:26 -- common/autotest_common.sh@930 -- # kill -0 88746 00:17:26.512 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (88746) - No such process 00:17:26.512 Process with pid 88746 is not found 00:17:26.512 02:19:26 -- common/autotest_common.sh@953 -- # echo 'Process with pid 88746 is not found' 00:17:26.512 02:19:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:26.512 02:19:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:26.512 02:19:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:26.512 02:19:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.512 02:19:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:26.512 02:19:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.512 02:19:26 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.512 02:19:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.512 02:19:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:26.771 02:19:26 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.771 00:17:26.771 real 1m10.107s 00:17:26.771 user 1m47.012s 00:17:26.771 sys 0m24.696s 00:17:26.771 02:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.771 02:19:26 -- common/autotest_common.sh@10 -- # set +x 00:17:26.771 ************************************ 00:17:26.771 END TEST nvmf_tls 00:17:26.771 ************************************ 00:17:26.771 02:19:26 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:26.771 02:19:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:26.771 02:19:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:26.771 02:19:26 -- common/autotest_common.sh@10 -- # set +x 00:17:26.771 ************************************ 00:17:26.771 START TEST nvmf_fips 00:17:26.771 ************************************ 00:17:26.771 02:19:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:26.771 * Looking for test storage... 00:17:26.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:26.771 02:19:26 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.771 02:19:26 -- nvmf/common.sh@7 -- # uname -s 00:17:26.771 02:19:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.771 02:19:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.771 02:19:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.771 02:19:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.771 02:19:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.771 02:19:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.771 02:19:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.771 02:19:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.771 02:19:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.771 02:19:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.771 02:19:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:17:26.771 02:19:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:17:26.771 02:19:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.771 02:19:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.771 02:19:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:26.771 02:19:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.771 02:19:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.771 02:19:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.771 02:19:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.771 02:19:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.771 02:19:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.771 02:19:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.771 02:19:26 -- paths/export.sh@5 -- # export PATH 00:17:26.771 02:19:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.771 02:19:26 -- nvmf/common.sh@46 -- # : 0 00:17:26.771 02:19:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:26.771 02:19:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:26.771 02:19:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:26.771 02:19:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.771 02:19:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.771 02:19:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:26.771 02:19:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:26.771 02:19:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:26.771 02:19:26 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.771 02:19:26 -- fips/fips.sh@89 -- # check_openssl_version 00:17:26.771 02:19:26 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:26.771 02:19:26 -- fips/fips.sh@85 -- # openssl version 00:17:26.771 02:19:26 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:26.771 02:19:26 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:26.771 02:19:26 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:26.771 02:19:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:26.771 02:19:26 -- scripts/common.sh@333 -- # local ver2 
ver2_l 00:17:26.771 02:19:26 -- scripts/common.sh@335 -- # IFS=.-: 00:17:26.771 02:19:26 -- scripts/common.sh@335 -- # read -ra ver1 00:17:26.771 02:19:26 -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.771 02:19:26 -- scripts/common.sh@336 -- # read -ra ver2 00:17:26.771 02:19:26 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:26.771 02:19:26 -- scripts/common.sh@339 -- # ver1_l=3 00:17:26.771 02:19:26 -- scripts/common.sh@340 -- # ver2_l=3 00:17:26.771 02:19:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:26.771 02:19:26 -- scripts/common.sh@343 -- # case "$op" in 00:17:26.771 02:19:26 -- scripts/common.sh@347 -- # : 1 00:17:26.771 02:19:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:26.771 02:19:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.771 02:19:26 -- scripts/common.sh@364 -- # decimal 3 00:17:26.771 02:19:26 -- scripts/common.sh@352 -- # local d=3 00:17:26.771 02:19:26 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:26.771 02:19:26 -- scripts/common.sh@354 -- # echo 3 00:17:26.771 02:19:26 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:26.771 02:19:26 -- scripts/common.sh@365 -- # decimal 3 00:17:26.771 02:19:26 -- scripts/common.sh@352 -- # local d=3 00:17:26.771 02:19:26 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:26.771 02:19:26 -- scripts/common.sh@354 -- # echo 3 00:17:26.771 02:19:26 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:26.771 02:19:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:26.771 02:19:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:26.771 02:19:26 -- scripts/common.sh@363 -- # (( v++ )) 00:17:26.771 02:19:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.771 02:19:26 -- scripts/common.sh@364 -- # decimal 0 00:17:26.771 02:19:26 -- scripts/common.sh@352 -- # local d=0 00:17:26.771 02:19:26 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:26.771 02:19:26 -- scripts/common.sh@354 -- # echo 0 00:17:26.771 02:19:26 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:26.771 02:19:26 -- scripts/common.sh@365 -- # decimal 0 00:17:26.771 02:19:26 -- scripts/common.sh@352 -- # local d=0 00:17:26.771 02:19:26 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:26.771 02:19:26 -- scripts/common.sh@354 -- # echo 0 00:17:26.771 02:19:26 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:26.771 02:19:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:26.771 02:19:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:26.771 02:19:26 -- scripts/common.sh@363 -- # (( v++ )) 00:17:26.771 02:19:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:26.771 02:19:26 -- scripts/common.sh@364 -- # decimal 9 00:17:26.771 02:19:26 -- scripts/common.sh@352 -- # local d=9 00:17:26.771 02:19:26 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:26.771 02:19:26 -- scripts/common.sh@354 -- # echo 9 00:17:26.771 02:19:26 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:26.771 02:19:26 -- scripts/common.sh@365 -- # decimal 0 00:17:26.771 02:19:26 -- scripts/common.sh@352 -- # local d=0 00:17:26.771 02:19:26 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:26.771 02:19:26 -- scripts/common.sh@354 -- # echo 0 00:17:26.771 02:19:26 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:26.771 02:19:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:26.771 02:19:26 -- scripts/common.sh@366 -- # return 0 00:17:26.771 02:19:26 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:26.771 02:19:26 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:26.771 02:19:26 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:26.772 02:19:26 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:26.772 02:19:26 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:26.772 02:19:26 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:26.772 02:19:26 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:26.772 02:19:26 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:26.772 02:19:26 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:26.772 02:19:26 -- fips/fips.sh@114 -- # build_openssl_config 00:17:26.772 02:19:26 -- fips/fips.sh@37 -- # cat 00:17:26.772 02:19:26 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:26.772 02:19:26 -- fips/fips.sh@58 -- # cat - 00:17:26.772 02:19:26 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:26.772 02:19:26 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:26.772 02:19:26 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:26.772 02:19:26 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:26.772 02:19:26 -- fips/fips.sh@117 -- # grep name 00:17:26.772 02:19:26 -- fips/fips.sh@117 -- # openssl list -providers 00:17:27.030 02:19:26 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:27.030 02:19:26 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:27.030 02:19:26 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:27.030 02:19:26 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:27.030 02:19:26 -- fips/fips.sh@128 -- # : 00:17:27.030 02:19:26 -- common/autotest_common.sh@640 -- # local es=0 00:17:27.030 02:19:26 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:27.030 02:19:26 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:27.030 02:19:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.030 02:19:26 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:27.030 02:19:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.030 02:19:26 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:27.030 02:19:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:27.030 02:19:26 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:27.030 02:19:26 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:27.030 02:19:26 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:27.030 Error setting digest 00:17:27.030 0072A7A1F57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:27.030 0072A7A1F57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:27.030 02:19:26 -- common/autotest_common.sh@643 -- # es=1 00:17:27.030 02:19:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:27.030 02:19:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:27.030 02:19:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:27.030 02:19:26 -- fips/fips.sh@131 -- # nvmftestinit 00:17:27.030 02:19:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:27.030 02:19:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.030 02:19:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:27.031 02:19:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:27.031 02:19:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:27.031 02:19:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.031 02:19:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.031 02:19:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.031 02:19:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:27.031 02:19:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:27.031 02:19:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:27.031 02:19:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:27.031 02:19:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:27.031 02:19:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:27.031 02:19:26 -- 
nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.031 02:19:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.031 02:19:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:27.031 02:19:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:27.031 02:19:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.031 02:19:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.031 02:19:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.031 02:19:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.031 02:19:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.031 02:19:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.031 02:19:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.031 02:19:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.031 02:19:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:27.031 02:19:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:27.031 Cannot find device "nvmf_tgt_br" 00:17:27.031 02:19:26 -- nvmf/common.sh@154 -- # true 00:17:27.031 02:19:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.031 Cannot find device "nvmf_tgt_br2" 00:17:27.031 02:19:26 -- nvmf/common.sh@155 -- # true 00:17:27.031 02:19:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:27.031 02:19:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:27.031 Cannot find device "nvmf_tgt_br" 00:17:27.031 02:19:26 -- nvmf/common.sh@157 -- # true 00:17:27.031 02:19:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:27.031 Cannot find device "nvmf_tgt_br2" 00:17:27.031 02:19:26 -- nvmf/common.sh@158 -- # true 00:17:27.031 02:19:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:27.031 02:19:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:27.031 02:19:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.290 02:19:26 -- nvmf/common.sh@161 -- # true 00:17:27.290 02:19:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.290 02:19:26 -- nvmf/common.sh@162 -- # true 00:17:27.290 02:19:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.290 02:19:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:27.290 02:19:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.290 02:19:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.290 02:19:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.290 02:19:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.290 02:19:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.290 02:19:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:27.290 02:19:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:27.290 02:19:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:27.290 02:19:26 -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:27.290 02:19:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:27.290 02:19:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:27.290 02:19:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:27.290 02:19:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:27.290 02:19:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:27.290 02:19:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:27.290 02:19:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:27.290 02:19:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:27.290 02:19:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:27.290 02:19:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:27.290 02:19:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:27.290 02:19:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.290 02:19:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:27.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:17:27.290 00:17:27.290 --- 10.0.0.2 ping statistics --- 00:17:27.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.290 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:27.290 02:19:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:27.290 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:27.290 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:27.290 00:17:27.290 --- 10.0.0.3 ping statistics --- 00:17:27.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.290 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:27.290 02:19:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:27.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:27.290 00:17:27.290 --- 10.0.0.1 ping statistics --- 00:17:27.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.290 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:27.290 02:19:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.290 02:19:26 -- nvmf/common.sh@421 -- # return 0 00:17:27.290 02:19:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:27.290 02:19:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.290 02:19:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:27.290 02:19:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:27.290 02:19:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.290 02:19:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:27.290 02:19:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:27.290 02:19:26 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:27.290 02:19:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:27.290 02:19:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:27.290 02:19:26 -- common/autotest_common.sh@10 -- # set +x 00:17:27.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
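
The nvmf_veth_init sequence traced above builds the self-contained test network every one of these nvmf tests runs on: a network namespace (nvmf_tgt_ns_spdk) holding the target ends of two veth pairs, a bridge (nvmf_br) joining the host-side peers, iptables rules admitting TCP port 4420, and a ping sweep to prove reachability. The "Cannot find device" and "Cannot open network namespace" lines are expected; the teardown half of the helper runs first and simply fails when no previous topology exists. A minimal standalone sketch of the same topology, using the names and addresses from this run and assuming root plus iproute2 and iptables (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is set up identically and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                   # bridge the host-side peers
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                        # reachability check, as above
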
00:17:27.290 02:19:26 -- nvmf/common.sh@469 -- # nvmfpid=89151 00:17:27.290 02:19:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:27.290 02:19:26 -- nvmf/common.sh@470 -- # waitforlisten 89151 00:17:27.290 02:19:26 -- common/autotest_common.sh@819 -- # '[' -z 89151 ']' 00:17:27.290 02:19:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.290 02:19:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:27.290 02:19:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.290 02:19:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:27.290 02:19:26 -- common/autotest_common.sh@10 -- # set +x 00:17:27.549 [2024-07-15 02:19:26.868456] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:27.549 [2024-07-15 02:19:26.868804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.549 [2024-07-15 02:19:27.001991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.549 [2024-07-15 02:19:27.083263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:27.549 [2024-07-15 02:19:27.083588] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.549 [2024-07-15 02:19:27.083727] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.549 [2024-07-15 02:19:27.083850] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
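
Before any of this, fips.sh gated the run on the OpenSSL version: check_openssl_version feeds "openssl version | awk '{print $2}'" (3.0.9 on this machine) to the ge helper, which scripts/common.sh implements by splitting both version strings on '.', '-' and ':' and comparing the parts numerically from left to right; the decimal/ver1[v]/ver2[v] steps traced earlier are exactly that walk (3 vs 3, 0 vs 0, 9 vs 0). The script then confirms the FIPS provider is actually loaded (openssl list -providers) and that a non-approved digest is really rejected, so the "Error setting digest" from openssl md5 is the expected outcome, not a failure. A simplified standalone sketch of the comparison, assuming purely numeric version components (the in-tree helper is more defensive):

  # Returns 0 when version $1 >= version $2, compared component-wise as traced above.
  ge() {
      local -a ver1 ver2
      local v len
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 0   # strictly newer
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 1   # strictly older
      done
      return 0                                              # equal
  }
  ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL >= 3.0.0"
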
00:17:27.549 [2024-07-15 02:19:27.084064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.484 02:19:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:28.484 02:19:27 -- common/autotest_common.sh@852 -- # return 0 00:17:28.484 02:19:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:28.484 02:19:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:28.484 02:19:27 -- common/autotest_common.sh@10 -- # set +x 00:17:28.484 02:19:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.484 02:19:27 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:28.484 02:19:27 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:28.484 02:19:27 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:28.484 02:19:27 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:28.484 02:19:27 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:28.484 02:19:27 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:28.484 02:19:27 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:28.484 02:19:27 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.743 [2024-07-15 02:19:28.071534] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.743 [2024-07-15 02:19:28.087497] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:28.743 [2024-07-15 02:19:28.087720] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.743 malloc0 00:17:28.743 02:19:28 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.743 02:19:28 -- fips/fips.sh@148 -- # bdevperf_pid=89211 00:17:28.743 02:19:28 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:28.743 02:19:28 -- fips/fips.sh@149 -- # waitforlisten 89211 /var/tmp/bdevperf.sock 00:17:28.743 02:19:28 -- common/autotest_common.sh@819 -- # '[' -z 89211 ']' 00:17:28.743 02:19:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.743 02:19:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:28.743 02:19:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:28.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:28.743 02:19:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:28.743 02:19:28 -- common/autotest_common.sh@10 -- # set +x 00:17:28.743 [2024-07-15 02:19:28.226988] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
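
The TLS plumbing here is worth spelling out: fips.sh writes a pre-shared key in the NVMe/TCP interchange format to key.txt, restricts it to mode 0600, hands it to the target through setup_nvmf_tgt_conf, and then, in the bdevperf trace that follows, attaches the initiator with the same key so the TLSTESTn1 controller comes up over a TLS-protected queue pair. Written out with the exact values from this run (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > "$key_path"
  chmod 0600 "$key_path"
  # Initiator side, issued against bdevperf's private RPC socket; bdevperf
  # then drives 10 s of verify I/O over the resulting connection.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$key_path"
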
00:17:28.743 [2024-07-15 02:19:28.227083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89211 ] 00:17:29.001 [2024-07-15 02:19:28.365116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.001 [2024-07-15 02:19:28.441054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.567 02:19:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:29.567 02:19:29 -- common/autotest_common.sh@852 -- # return 0 00:17:29.567 02:19:29 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:29.825 [2024-07-15 02:19:29.311631] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.083 TLSTESTn1 00:17:30.083 02:19:29 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:30.083 Running I/O for 10 seconds... 00:17:40.074 00:17:40.074 Latency(us) 00:17:40.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.074 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:40.074 Verification LBA range: start 0x0 length 0x2000 00:17:40.074 TLSTESTn1 : 10.01 6146.20 24.01 0.00 0.00 20793.06 5242.88 24188.74 00:17:40.074 =================================================================================================================== 00:17:40.074 Total : 6146.20 24.01 0.00 0.00 20793.06 5242.88 24188.74 00:17:40.074 0 00:17:40.074 02:19:39 -- fips/fips.sh@1 -- # cleanup 00:17:40.074 02:19:39 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:40.074 02:19:39 -- common/autotest_common.sh@796 -- # type=--id 00:17:40.074 02:19:39 -- common/autotest_common.sh@797 -- # id=0 00:17:40.074 02:19:39 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:40.074 02:19:39 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:40.074 02:19:39 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:40.074 02:19:39 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:40.074 02:19:39 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:40.074 02:19:39 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:40.074 nvmf_trace.0 00:17:40.074 02:19:39 -- common/autotest_common.sh@811 -- # return 0 00:17:40.074 02:19:39 -- fips/fips.sh@16 -- # killprocess 89211 00:17:40.074 02:19:39 -- common/autotest_common.sh@926 -- # '[' -z 89211 ']' 00:17:40.074 02:19:39 -- common/autotest_common.sh@930 -- # kill -0 89211 00:17:40.074 02:19:39 -- common/autotest_common.sh@931 -- # uname 00:17:40.074 02:19:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:40.074 02:19:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89211 00:17:40.331 02:19:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:40.331 02:19:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:40.331 killing process with pid 89211 00:17:40.331 02:19:39 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 89211' 00:17:40.331 02:19:39 -- common/autotest_common.sh@945 -- # kill 89211 00:17:40.331 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.331 00:17:40.331 Latency(us) 00:17:40.331 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.331 =================================================================================================================== 00:17:40.331 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.331 02:19:39 -- common/autotest_common.sh@950 -- # wait 89211 00:17:40.331 02:19:39 -- fips/fips.sh@17 -- # nvmftestfini 00:17:40.331 02:19:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:40.331 02:19:39 -- nvmf/common.sh@116 -- # sync 00:17:40.588 02:19:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:40.588 02:19:39 -- nvmf/common.sh@119 -- # set +e 00:17:40.588 02:19:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:40.588 02:19:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:40.588 rmmod nvme_tcp 00:17:40.588 rmmod nvme_fabrics 00:17:40.588 rmmod nvme_keyring 00:17:40.588 02:19:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:40.588 02:19:39 -- nvmf/common.sh@123 -- # set -e 00:17:40.588 02:19:39 -- nvmf/common.sh@124 -- # return 0 00:17:40.588 02:19:39 -- nvmf/common.sh@477 -- # '[' -n 89151 ']' 00:17:40.588 02:19:39 -- nvmf/common.sh@478 -- # killprocess 89151 00:17:40.588 02:19:39 -- common/autotest_common.sh@926 -- # '[' -z 89151 ']' 00:17:40.588 02:19:39 -- common/autotest_common.sh@930 -- # kill -0 89151 00:17:40.588 02:19:39 -- common/autotest_common.sh@931 -- # uname 00:17:40.588 02:19:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:40.588 02:19:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89151 00:17:40.588 02:19:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:40.588 02:19:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:40.588 killing process with pid 89151 00:17:40.588 02:19:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89151' 00:17:40.588 02:19:39 -- common/autotest_common.sh@945 -- # kill 89151 00:17:40.588 02:19:39 -- common/autotest_common.sh@950 -- # wait 89151 00:17:40.846 02:19:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:40.846 02:19:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:40.846 02:19:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:40.846 02:19:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.846 02:19:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:40.846 02:19:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.846 02:19:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.846 02:19:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.846 02:19:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:40.846 02:19:40 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:40.846 00:17:40.846 real 0m14.111s 00:17:40.846 user 0m18.590s 00:17:40.846 sys 0m5.946s 00:17:40.846 02:19:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.846 ************************************ 00:17:40.846 END TEST nvmf_fips 00:17:40.846 ************************************ 00:17:40.846 02:19:40 -- common/autotest_common.sh@10 -- # set +x 00:17:40.846 02:19:40 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:40.846 02:19:40 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:40.846 02:19:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:40.846 02:19:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.846 02:19:40 -- common/autotest_common.sh@10 -- # set +x 00:17:40.846 ************************************ 00:17:40.846 START TEST nvmf_fuzz 00:17:40.846 ************************************ 00:17:40.846 02:19:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:40.846 * Looking for test storage... 00:17:40.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:40.846 02:19:40 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.846 02:19:40 -- nvmf/common.sh@7 -- # uname -s 00:17:40.846 02:19:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.846 02:19:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.846 02:19:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.846 02:19:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.846 02:19:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.846 02:19:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.846 02:19:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.846 02:19:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.846 02:19:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.846 02:19:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.846 02:19:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:17:40.846 02:19:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:17:40.846 02:19:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.846 02:19:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.846 02:19:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.846 02:19:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.846 02:19:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.846 02:19:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.846 02:19:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.846 02:19:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.846 02:19:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.846 
02:19:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.846 02:19:40 -- paths/export.sh@5 -- # export PATH 00:17:40.847 02:19:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.847 02:19:40 -- nvmf/common.sh@46 -- # : 0 00:17:40.847 02:19:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:40.847 02:19:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:40.847 02:19:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:40.847 02:19:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.847 02:19:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.847 02:19:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:40.847 02:19:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:40.847 02:19:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:40.847 02:19:40 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:40.847 02:19:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:40.847 02:19:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.847 02:19:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:40.847 02:19:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:40.847 02:19:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:40.847 02:19:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.847 02:19:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.847 02:19:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.847 02:19:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:40.847 02:19:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:40.847 02:19:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:40.847 02:19:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:40.847 02:19:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:40.847 02:19:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:40.847 02:19:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.847 02:19:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.847 02:19:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:40.847 02:19:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:40.847 02:19:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.847 02:19:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.847 02:19:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.847 02:19:40 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.847 02:19:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.847 02:19:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.847 02:19:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.847 02:19:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.847 02:19:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:40.847 02:19:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:41.105 Cannot find device "nvmf_tgt_br" 00:17:41.105 02:19:40 -- nvmf/common.sh@154 -- # true 00:17:41.105 02:19:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:41.105 Cannot find device "nvmf_tgt_br2" 00:17:41.105 02:19:40 -- nvmf/common.sh@155 -- # true 00:17:41.105 02:19:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:41.105 02:19:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:41.105 Cannot find device "nvmf_tgt_br" 00:17:41.105 02:19:40 -- nvmf/common.sh@157 -- # true 00:17:41.105 02:19:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:41.105 Cannot find device "nvmf_tgt_br2" 00:17:41.105 02:19:40 -- nvmf/common.sh@158 -- # true 00:17:41.105 02:19:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:41.105 02:19:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:41.105 02:19:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.105 02:19:40 -- nvmf/common.sh@161 -- # true 00:17:41.105 02:19:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.105 02:19:40 -- nvmf/common.sh@162 -- # true 00:17:41.105 02:19:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:41.105 02:19:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:41.105 02:19:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:41.105 02:19:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.105 02:19:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.105 02:19:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.105 02:19:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.105 02:19:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:41.105 02:19:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:41.105 02:19:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:41.105 02:19:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:41.105 02:19:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:41.105 02:19:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:41.105 02:19:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.105 02:19:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.105 02:19:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.105 02:19:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:17:41.105 02:19:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:41.105 02:19:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.105 02:19:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.362 02:19:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.362 02:19:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.362 02:19:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.362 02:19:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:41.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:17:41.362 00:17:41.362 --- 10.0.0.2 ping statistics --- 00:17:41.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.362 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:17:41.362 02:19:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:41.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:41.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:41.362 00:17:41.362 --- 10.0.0.3 ping statistics --- 00:17:41.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.363 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:41.363 02:19:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:41.363 00:17:41.363 --- 10.0.0.1 ping statistics --- 00:17:41.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.363 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:41.363 02:19:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.363 02:19:40 -- nvmf/common.sh@421 -- # return 0 00:17:41.363 02:19:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:41.363 02:19:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.363 02:19:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:41.363 02:19:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:41.363 02:19:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.363 02:19:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:41.363 02:19:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:41.363 02:19:40 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=89547 00:17:41.363 02:19:40 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:41.363 02:19:40 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:41.363 02:19:40 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 89547 00:17:41.363 02:19:40 -- common/autotest_common.sh@819 -- # '[' -z 89547 ']' 00:17:41.363 02:19:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.363 02:19:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.363 02:19:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
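
waitforlisten, invoked here for pid 89547 just as it was for 89151 and 89211 earlier, is autotest_common.sh's gate between exec'ing an SPDK app and talking to it: it polls until the app answers on its RPC socket (rpc_addr defaults to /var/tmp/spdk.sock, max_retries=100 per the trace). A minimal stand-in with the same shape, not the in-tree implementation; rpc_get_methods is used because any SPDK app serves it once its RPC server is listening:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1     # app died while starting
          rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1                                       # never came up
  }
  waitforlisten "$nvmfpid"
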
00:17:41.363 02:19:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.363 02:19:40 -- common/autotest_common.sh@10 -- # set +x 00:17:42.295 02:19:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:42.295 02:19:41 -- common/autotest_common.sh@852 -- # return 0 00:17:42.295 02:19:41 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:42.295 02:19:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:42.295 02:19:41 -- common/autotest_common.sh@10 -- # set +x 00:17:42.295 02:19:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:42.295 02:19:41 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:42.295 02:19:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:42.295 02:19:41 -- common/autotest_common.sh@10 -- # set +x 00:17:42.295 Malloc0 00:17:42.295 02:19:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:42.295 02:19:41 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:42.295 02:19:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:42.295 02:19:41 -- common/autotest_common.sh@10 -- # set +x 00:17:42.552 02:19:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:42.552 02:19:41 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:42.552 02:19:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:42.552 02:19:41 -- common/autotest_common.sh@10 -- # set +x 00:17:42.552 02:19:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:42.552 02:19:41 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.552 02:19:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:42.552 02:19:41 -- common/autotest_common.sh@10 -- # set +x 00:17:42.552 02:19:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:42.552 02:19:41 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:42.552 02:19:41 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:42.810 Shutting down the fuzz application 00:17:42.810 02:19:42 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:43.067 Shutting down the fuzz application 00:17:43.067 02:19:42 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.067 02:19:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.067 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.067 02:19:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.067 02:19:42 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:43.067 02:19:42 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:43.067 02:19:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:43.067 02:19:42 -- nvmf/common.sh@116 -- # sync 00:17:43.067 02:19:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:43.067 02:19:42 -- nvmf/common.sh@119 -- # set +e 00:17:43.067 02:19:42 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:43.067 02:19:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:43.067 rmmod nvme_tcp 00:17:43.067 rmmod nvme_fabrics 00:17:43.067 rmmod nvme_keyring 00:17:43.326 02:19:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:43.326 02:19:42 -- nvmf/common.sh@123 -- # set -e 00:17:43.326 02:19:42 -- nvmf/common.sh@124 -- # return 0 00:17:43.326 02:19:42 -- nvmf/common.sh@477 -- # '[' -n 89547 ']' 00:17:43.326 02:19:42 -- nvmf/common.sh@478 -- # killprocess 89547 00:17:43.326 02:19:42 -- common/autotest_common.sh@926 -- # '[' -z 89547 ']' 00:17:43.326 02:19:42 -- common/autotest_common.sh@930 -- # kill -0 89547 00:17:43.326 02:19:42 -- common/autotest_common.sh@931 -- # uname 00:17:43.326 02:19:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:43.326 02:19:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89547 00:17:43.326 02:19:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:43.326 02:19:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:43.326 02:19:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89547' 00:17:43.326 killing process with pid 89547 00:17:43.326 02:19:42 -- common/autotest_common.sh@945 -- # kill 89547 00:17:43.326 02:19:42 -- common/autotest_common.sh@950 -- # wait 89547 00:17:43.326 02:19:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:43.326 02:19:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:43.326 02:19:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:43.326 02:19:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.326 02:19:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:43.326 02:19:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.326 02:19:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.326 02:19:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.585 02:19:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:43.585 02:19:42 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:43.585 00:17:43.585 real 0m2.637s 00:17:43.585 user 0m2.763s 00:17:43.585 sys 0m0.639s 00:17:43.585 02:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.585 ************************************ 00:17:43.585 END TEST nvmf_fuzz 00:17:43.585 ************************************ 00:17:43.585 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.585 02:19:42 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:43.585 02:19:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:43.585 02:19:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:43.585 02:19:42 -- common/autotest_common.sh@10 -- # set +x 00:17:43.585 ************************************ 00:17:43.585 START TEST nvmf_multiconnection 00:17:43.585 ************************************ 00:17:43.585 02:19:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:43.585 * Looking for test storage... 
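
To summarize the fuzz pass that just finished: the target exported one 64 MB malloc namespace through nqn.2016-06.io.spdk:cnode1, nvme_fuzz hit it twice, first a timed, seeded random pass (-t 30 -S 123456, with the -N and -a options as recorded above) and then a replay of the canned cases in example.json (-j), and both runs reached "Shutting down the fuzz application" cleanly before the subsystem was deleted. The target-side commands as issued through rpc_cmd above, with the long paths shortened to rpc.py and nvme_fuzz:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512        # 64 MB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a
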
00:17:43.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:43.585 02:19:43 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:43.586 02:19:43 -- nvmf/common.sh@7 -- # uname -s 00:17:43.586 02:19:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.586 02:19:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.586 02:19:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.586 02:19:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.586 02:19:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.586 02:19:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.586 02:19:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.586 02:19:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.586 02:19:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.586 02:19:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.586 02:19:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:17:43.586 02:19:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:17:43.586 02:19:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.586 02:19:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.586 02:19:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.586 02:19:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.586 02:19:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.586 02:19:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.586 02:19:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.586 02:19:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.586 02:19:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.586 02:19:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.586 02:19:43 -- 
paths/export.sh@5 -- # export PATH 00:17:43.586 02:19:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.586 02:19:43 -- nvmf/common.sh@46 -- # : 0 00:17:43.586 02:19:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:43.586 02:19:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:43.586 02:19:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:43.586 02:19:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.586 02:19:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.586 02:19:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:43.586 02:19:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:43.586 02:19:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:43.586 02:19:43 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:43.586 02:19:43 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:43.586 02:19:43 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:43.586 02:19:43 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:43.586 02:19:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:43.586 02:19:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.586 02:19:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:43.586 02:19:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:43.586 02:19:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:43.586 02:19:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.586 02:19:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.586 02:19:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.586 02:19:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:43.586 02:19:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:43.586 02:19:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:43.586 02:19:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:43.586 02:19:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:43.586 02:19:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:43.586 02:19:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.586 02:19:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.586 02:19:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:43.586 02:19:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:43.586 02:19:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:43.586 02:19:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:43.586 02:19:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:43.586 02:19:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.586 02:19:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:43.586 02:19:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:43.586 02:19:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:43.586 02:19:43 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:43.586 02:19:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:43.586 02:19:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:43.586 Cannot find device "nvmf_tgt_br" 00:17:43.586 02:19:43 -- nvmf/common.sh@154 -- # true 00:17:43.586 02:19:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.586 Cannot find device "nvmf_tgt_br2" 00:17:43.586 02:19:43 -- nvmf/common.sh@155 -- # true 00:17:43.586 02:19:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:43.586 02:19:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:43.586 Cannot find device "nvmf_tgt_br" 00:17:43.586 02:19:43 -- nvmf/common.sh@157 -- # true 00:17:43.586 02:19:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:43.851 Cannot find device "nvmf_tgt_br2" 00:17:43.851 02:19:43 -- nvmf/common.sh@158 -- # true 00:17:43.851 02:19:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:43.851 02:19:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:43.851 02:19:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.851 02:19:43 -- nvmf/common.sh@161 -- # true 00:17:43.851 02:19:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.851 02:19:43 -- nvmf/common.sh@162 -- # true 00:17:43.851 02:19:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:43.851 02:19:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:43.851 02:19:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:43.851 02:19:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:43.851 02:19:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:43.851 02:19:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:43.851 02:19:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:43.851 02:19:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:43.851 02:19:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:43.851 02:19:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:43.851 02:19:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:43.851 02:19:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:43.851 02:19:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:43.851 02:19:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:43.851 02:19:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:43.851 02:19:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:43.851 02:19:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:43.851 02:19:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:43.851 02:19:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:43.851 02:19:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:43.851 02:19:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.108 
02:19:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.108 02:19:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.108 02:19:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:44.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:17:44.108 00:17:44.108 --- 10.0.0.2 ping statistics --- 00:17:44.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.108 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:44.108 02:19:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:44.108 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.108 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:44.108 00:17:44.108 --- 10.0.0.3 ping statistics --- 00:17:44.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.108 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:44.108 02:19:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:44.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:17:44.108 00:17:44.108 --- 10.0.0.1 ping statistics --- 00:17:44.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.108 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:44.108 02:19:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.108 02:19:43 -- nvmf/common.sh@421 -- # return 0 00:17:44.108 02:19:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:44.108 02:19:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.108 02:19:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:44.108 02:19:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:44.108 02:19:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.108 02:19:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:44.108 02:19:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:44.108 02:19:43 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:44.108 02:19:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:44.108 02:19:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:44.108 02:19:43 -- common/autotest_common.sh@10 -- # set +x 00:17:44.108 02:19:43 -- nvmf/common.sh@469 -- # nvmfpid=89753 00:17:44.108 02:19:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:44.108 02:19:43 -- nvmf/common.sh@470 -- # waitforlisten 89753 00:17:44.108 02:19:43 -- common/autotest_common.sh@819 -- # '[' -z 89753 ']' 00:17:44.108 02:19:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.108 02:19:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:44.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.108 02:19:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.108 02:19:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.108 02:19:43 -- common/autotest_common.sh@10 -- # set +x 00:17:44.108 [2024-07-15 02:19:43.523088] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
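
One detail that differs across the three targets in this section is the reactor mask: the FIPS target ran with -m 0x2 (one reactor, on core 1) alongside bdevperf on -m 0x4 (core 2), the fuzz target used -m 0x1 (core 0), and this multiconnection target requests -m 0xF, which brings up reactors on four cores as the startup notices just below show. The argument is a plain hexadecimal cpumask, one bit per core; an illustrative helper for composing one (not part of the test scripts):

  mask_for_cores() {
      local m=0 c
      for c in "$@"; do (( m |= 1 << c )); done
      printf '0x%x\n' "$m"
  }
  mask_for_cores 1          # 0x2, the FIPS-test target
  mask_for_cores 2          # 0x4, bdevperf
  mask_for_cores 0 1 2 3    # 0xf, this multiconnection target
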
00:17:44.108 [2024-07-15 02:19:43.523196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.108 [2024-07-15 02:19:43.657894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.365 [2024-07-15 02:19:43.745971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:44.365 [2024-07-15 02:19:43.746141] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.365 [2024-07-15 02:19:43.746155] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.365 [2024-07-15 02:19:43.746164] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.365 [2024-07-15 02:19:43.746849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.365 [2024-07-15 02:19:43.746935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.365 [2024-07-15 02:19:43.747032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.365 [2024-07-15 02:19:43.747040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.979 02:19:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:44.979 02:19:44 -- common/autotest_common.sh@852 -- # return 0 00:17:44.979 02:19:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:44.979 02:19:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:44.979 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 02:19:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.979 02:19:44 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.979 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:44.979 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:44.979 [2024-07-15 02:19:44.527623] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.236 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.236 02:19:44 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:45.236 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.236 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:45.236 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.236 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.236 Malloc1 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 [2024-07-15 02:19:44.603450] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.237 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 Malloc2 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.237 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 Malloc3 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.237 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:45.237 
02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 Malloc4 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.237 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 Malloc5 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.237 02:19:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:45.237 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.237 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.495 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.495 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.495 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:45.495 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.495 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.495 Malloc6 00:17:45.495 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.495 02:19:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:45.495 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.495 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.495 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.495 02:19:44 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:45.495 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.495 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.495 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.495 02:19:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:45.495 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.495 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.495 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.495 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.495 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:45.495 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 Malloc7 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.496 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 Malloc8 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 
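The same four RPCs now repeat for every subsystem: create a malloc bdev, create the subsystem, attach the bdev as a namespace, add a TCP listener. The whole loop, sketched with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (assumption: both talk to the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Transport is created once, before the loop (flags as logged).
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      # 64 MB RAM-backed bdev with 512-byte blocks, named Malloc$i.
      $rpc bdev_malloc_create 64 512 -b "Malloc$i"
      # -a allows any host; -s sets the serial that waitforserial greps for.
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      # All eleven subsystems share one listener address and port.
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done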
00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.496 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 Malloc9 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:44 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.496 02:19:44 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:45.496 02:19:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 Malloc10 00:17:45.496 02:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:45.496 02:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:45 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:45.496 02:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:45 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:45.496 02:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:45 -- common/autotest_common.sh@10 -- # set +x 00:17:45.496 02:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.496 02:19:45 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.496 02:19:45 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:45.496 02:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.496 02:19:45 -- common/autotest_common.sh@10 -- # set +x 00:17:45.754 Malloc11 00:17:45.754 02:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.754 02:19:45 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:45.754 02:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.754 02:19:45 -- common/autotest_common.sh@10 -- # set +x 00:17:45.754 02:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.754 02:19:45 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:45.754 02:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.754 02:19:45 -- common/autotest_common.sh@10 -- # set +x 00:17:45.754 02:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.754 02:19:45 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:45.754 02:19:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.754 02:19:45 -- common/autotest_common.sh@10 -- # set +x 00:17:45.754 02:19:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.754 02:19:45 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:45.754 02:19:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:45.754 02:19:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:45.754 02:19:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:45.754 02:19:45 -- common/autotest_common.sh@1177 -- # local i=0 00:17:45.754 02:19:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.754 02:19:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:45.754 02:19:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:48.278 02:19:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:48.278 02:19:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:48.278 02:19:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:17:48.278 02:19:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:48.278 02:19:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.278 02:19:47 -- common/autotest_common.sh@1187 -- # return 0 00:17:48.278 02:19:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:48.278 02:19:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:17:48.278 02:19:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:17:48.278 02:19:47 -- common/autotest_common.sh@1177 -- # local i=0 00:17:48.278 02:19:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.278 02:19:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:48.278 02:19:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:50.177 02:19:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:50.177 02:19:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:50.177 02:19:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:17:50.177 02:19:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:50.177 02:19:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.177 02:19:49 -- common/autotest_common.sh@1187 -- # return 0 00:17:50.177 02:19:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:17:50.177 02:19:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:17:50.177 02:19:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:17:50.177 02:19:49 -- common/autotest_common.sh@1177 -- # local i=0 00:17:50.177 02:19:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.177 02:19:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:50.177 02:19:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:52.705 02:19:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:52.705 02:19:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:52.705 02:19:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:17:52.705 02:19:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:52.705 02:19:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:52.705 02:19:51 -- common/autotest_common.sh@1187 -- # return 0 00:17:52.705 02:19:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:52.705 02:19:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:17:52.705 02:19:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:17:52.705 02:19:51 -- common/autotest_common.sh@1177 -- # local i=0 00:17:52.705 02:19:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.705 02:19:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:52.705 02:19:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:54.602 02:19:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:54.602 02:19:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:54.602 02:19:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:17:54.602 02:19:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:54.602 02:19:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.602 02:19:53 -- common/autotest_common.sh@1187 -- # return 0 00:17:54.602 02:19:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:54.602 02:19:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:17:54.602 02:19:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:17:54.602 02:19:54 -- common/autotest_common.sh@1177 -- # local i=0 00:17:54.602 02:19:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.602 02:19:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:54.602 02:19:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:56.509 02:19:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:56.509 02:19:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:56.509 02:19:56 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:17:56.509 02:19:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:56.509 02:19:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:56.509 02:19:56 
-- common/autotest_common.sh@1187 -- # return 0 00:17:56.509 02:19:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:56.509 02:19:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:17:56.767 02:19:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:17:56.767 02:19:56 -- common/autotest_common.sh@1177 -- # local i=0 00:17:56.767 02:19:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.767 02:19:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:56.767 02:19:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:58.668 02:19:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:58.926 02:19:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:58.926 02:19:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:17:58.926 02:19:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:58.926 02:19:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.926 02:19:58 -- common/autotest_common.sh@1187 -- # return 0 00:17:58.926 02:19:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.926 02:19:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:17:58.926 02:19:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:17:58.926 02:19:58 -- common/autotest_common.sh@1177 -- # local i=0 00:17:58.926 02:19:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.926 02:19:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:58.926 02:19:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:01.453 02:20:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:01.453 02:20:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:01.453 02:20:00 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:01.453 02:20:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:01.453 02:20:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.453 02:20:00 -- common/autotest_common.sh@1187 -- # return 0 00:18:01.453 02:20:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.453 02:20:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:01.453 02:20:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:01.453 02:20:00 -- common/autotest_common.sh@1177 -- # local i=0 00:18:01.453 02:20:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.453 02:20:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:01.453 02:20:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:03.355 02:20:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:03.355 02:20:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:03.355 02:20:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:03.355 02:20:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
00:18:03.355 02:20:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.355 02:20:02 -- common/autotest_common.sh@1187 -- # return 0 00:18:03.355 02:20:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:03.355 02:20:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:03.355 02:20:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:03.355 02:20:02 -- common/autotest_common.sh@1177 -- # local i=0 00:18:03.355 02:20:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.355 02:20:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:03.355 02:20:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:05.253 02:20:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:05.253 02:20:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:05.253 02:20:04 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:18:05.511 02:20:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:05.511 02:20:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.511 02:20:04 -- common/autotest_common.sh@1187 -- # return 0 00:18:05.511 02:20:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.511 02:20:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:05.511 02:20:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:05.511 02:20:05 -- common/autotest_common.sh@1177 -- # local i=0 00:18:05.511 02:20:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.511 02:20:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:05.511 02:20:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:08.046 02:20:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:08.046 02:20:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:08.046 02:20:07 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:18:08.046 02:20:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:08.046 02:20:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.046 02:20:07 -- common/autotest_common.sh@1187 -- # return 0 00:18:08.046 02:20:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.046 02:20:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:08.046 02:20:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:08.046 02:20:07 -- common/autotest_common.sh@1177 -- # local i=0 00:18:08.046 02:20:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.046 02:20:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:08.046 02:20:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:09.950 02:20:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:09.950 02:20:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:09.950 02:20:09 
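On the host side every attach follows the same pattern: nvme connect to the subsystem, then poll lsblk until a block device carrying the expected serial shows up (the traced waitforserial allows up to 15 two-second retries). A sketch of the loop, with the host UUID as logged:

  host=97a9fd12-e411-46d9-8a8a-09652cab25c1
  for i in $(seq 1 11); do
      nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$host" \
          --hostid="$host" -t tcp -n "nqn.2016-06.io.spdk:cnode$i" \
          -a 10.0.0.2 -s 4420
      tries=0
      # Wait for the namespace to surface as /dev/nvmeXn1 with serial SPDK$i.
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
          tries=$((tries + 1)); [ "$tries" -le 15 ] || exit 1
          sleep 2
      done
  done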
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:18:09.950 02:20:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:09.950 02:20:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.950 02:20:09 -- common/autotest_common.sh@1187 -- # return 0 00:18:09.950 02:20:09 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:09.950 [global] 00:18:09.950 thread=1 00:18:09.950 invalidate=1 00:18:09.950 rw=read 00:18:09.950 time_based=1 00:18:09.950 runtime=10 00:18:09.950 ioengine=libaio 00:18:09.950 direct=1 00:18:09.950 bs=262144 00:18:09.950 iodepth=64 00:18:09.950 norandommap=1 00:18:09.950 numjobs=1 00:18:09.950 00:18:09.950 [job0] 00:18:09.950 filename=/dev/nvme0n1 00:18:09.950 [job1] 00:18:09.950 filename=/dev/nvme10n1 00:18:09.950 [job2] 00:18:09.950 filename=/dev/nvme1n1 00:18:09.950 [job3] 00:18:09.950 filename=/dev/nvme2n1 00:18:09.950 [job4] 00:18:09.950 filename=/dev/nvme3n1 00:18:09.950 [job5] 00:18:09.950 filename=/dev/nvme4n1 00:18:09.950 [job6] 00:18:09.950 filename=/dev/nvme5n1 00:18:09.950 [job7] 00:18:09.950 filename=/dev/nvme6n1 00:18:09.950 [job8] 00:18:09.950 filename=/dev/nvme7n1 00:18:09.950 [job9] 00:18:09.950 filename=/dev/nvme8n1 00:18:09.950 [job10] 00:18:09.950 filename=/dev/nvme9n1 00:18:09.950 Could not set queue depth (nvme0n1) 00:18:09.950 Could not set queue depth (nvme10n1) 00:18:09.950 Could not set queue depth (nvme1n1) 00:18:09.950 Could not set queue depth (nvme2n1) 00:18:09.950 Could not set queue depth (nvme3n1) 00:18:09.951 Could not set queue depth (nvme4n1) 00:18:09.951 Could not set queue depth (nvme5n1) 00:18:09.951 Could not set queue depth (nvme6n1) 00:18:09.951 Could not set queue depth (nvme7n1) 00:18:09.951 Could not set queue depth (nvme8n1) 00:18:09.951 Could not set queue depth (nvme9n1) 00:18:10.210 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:10.210 fio-3.35 00:18:10.210 Starting 11 threads 00:18:22.412 00:18:22.412 job0: (groupid=0, jobs=1): err= 0: pid=90229: Mon Jul 15 02:20:19 2024 00:18:22.412 read: IOPS=985, BW=246MiB/s (258MB/s)(2488MiB/10092msec) 00:18:22.412 slat (usec): min=21, max=94590, avg=998.92, stdev=4201.65 
00:18:22.412 clat (msec): min=13, max=200, avg=63.82, stdev=39.02 00:18:22.412 lat (msec): min=13, max=200, avg=64.82, stdev=39.74 00:18:22.412 clat percentiles (msec): 00:18:22.412 | 1.00th=[ 19], 5.00th=[ 22], 10.00th=[ 25], 20.00th=[ 28], 00:18:22.412 | 30.00th=[ 31], 40.00th=[ 34], 50.00th=[ 41], 60.00th=[ 73], 00:18:22.412 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 115], 95.00th=[ 120], 00:18:22.412 | 99.00th=[ 133], 99.50th=[ 150], 99.90th=[ 201], 99.95th=[ 201], 00:18:22.412 | 99.99th=[ 201] 00:18:22.412 bw ( KiB/s): min=140007, max=559616, per=13.51%, avg=253025.80, stdev=163958.67, samples=20 00:18:22.412 iops : min= 546, max= 2186, avg=988.30, stdev=640.48, samples=20 00:18:22.412 lat (msec) : 20=2.34%, 50=49.95%, 100=15.02%, 250=32.69% 00:18:22.412 cpu : usr=0.33%, sys=3.12%, ctx=1825, majf=0, minf=4097 00:18:22.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:22.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.412 issued rwts: total=9950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.412 job1: (groupid=0, jobs=1): err= 0: pid=90230: Mon Jul 15 02:20:19 2024 00:18:22.412 read: IOPS=556, BW=139MiB/s (146MB/s)(1402MiB/10079msec) 00:18:22.412 slat (usec): min=17, max=82073, avg=1780.80, stdev=6296.62 00:18:22.412 clat (msec): min=19, max=190, avg=113.11, stdev=14.15 00:18:22.412 lat (msec): min=21, max=196, avg=114.89, stdev=15.36 00:18:22.412 clat percentiles (msec): 00:18:22.412 | 1.00th=[ 71], 5.00th=[ 88], 10.00th=[ 95], 20.00th=[ 104], 00:18:22.412 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 118], 00:18:22.412 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 134], 00:18:22.412 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 190], 00:18:22.412 | 99.99th=[ 192] 00:18:22.412 bw ( KiB/s): min=126976, max=175426, per=7.57%, avg=141811.00, stdev=11283.40, samples=20 00:18:22.412 iops : min= 496, max= 685, avg=553.80, stdev=44.04, samples=20 00:18:22.412 lat (msec) : 20=0.02%, 50=0.04%, 100=14.57%, 250=85.37% 00:18:22.412 cpu : usr=0.25%, sys=2.13%, ctx=1059, majf=0, minf=4097 00:18:22.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:22.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.412 issued rwts: total=5606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.412 job2: (groupid=0, jobs=1): err= 0: pid=90231: Mon Jul 15 02:20:19 2024 00:18:22.412 read: IOPS=1128, BW=282MiB/s (296MB/s)(2830MiB/10031msec) 00:18:22.412 slat (usec): min=20, max=39239, avg=878.59, stdev=3313.62 00:18:22.412 clat (usec): min=19102, max=94068, avg=55747.89, stdev=8193.66 00:18:22.412 lat (msec): min=19, max=102, avg=56.63, stdev= 8.52 00:18:22.412 clat percentiles (usec): 00:18:22.412 | 1.00th=[35390], 5.00th=[42206], 10.00th=[45876], 20.00th=[49021], 00:18:22.412 | 30.00th=[51643], 40.00th=[53740], 50.00th=[55837], 60.00th=[57934], 00:18:22.412 | 70.00th=[60031], 80.00th=[62653], 90.00th=[65799], 95.00th=[68682], 00:18:22.412 | 99.00th=[74974], 99.50th=[77071], 99.90th=[83362], 99.95th=[85459], 00:18:22.412 | 99.99th=[93848] 00:18:22.412 bw ( KiB/s): min=267264, max=308119, per=15.39%, avg=288097.15, stdev=12944.23, 
samples=20 00:18:22.412 iops : min= 1044, max= 1203, avg=1125.35, stdev=50.52, samples=20 00:18:22.412 lat (msec) : 20=0.04%, 50=24.30%, 100=75.66% 00:18:22.412 cpu : usr=0.45%, sys=3.29%, ctx=2068, majf=0, minf=4097 00:18:22.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:22.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.412 issued rwts: total=11318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.412 job3: (groupid=0, jobs=1): err= 0: pid=90232: Mon Jul 15 02:20:19 2024 00:18:22.412 read: IOPS=525, BW=131MiB/s (138MB/s)(1330MiB/10118msec) 00:18:22.412 slat (usec): min=21, max=86648, avg=1867.17, stdev=7042.81 00:18:22.412 clat (msec): min=21, max=269, avg=119.65, stdev=32.46 00:18:22.412 lat (msec): min=21, max=269, avg=121.52, stdev=33.52 00:18:22.412 clat percentiles (msec): 00:18:22.412 | 1.00th=[ 37], 5.00th=[ 53], 10.00th=[ 60], 20.00th=[ 109], 00:18:22.412 | 30.00th=[ 120], 40.00th=[ 125], 50.00th=[ 129], 60.00th=[ 132], 00:18:22.412 | 70.00th=[ 138], 80.00th=[ 140], 90.00th=[ 146], 95.00th=[ 153], 00:18:22.412 | 99.00th=[ 186], 99.50th=[ 224], 99.90th=[ 234], 99.95th=[ 271], 00:18:22.412 | 99.99th=[ 271] 00:18:22.412 bw ( KiB/s): min=109568, max=268825, per=7.19%, avg=134559.10, stdev=40377.99, samples=20 00:18:22.412 iops : min= 428, max= 1050, avg=525.55, stdev=157.69, samples=20 00:18:22.412 lat (msec) : 50=3.50%, 100=13.61%, 250=82.80%, 500=0.09% 00:18:22.412 cpu : usr=0.14%, sys=1.78%, ctx=1157, majf=0, minf=4097 00:18:22.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:22.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.412 issued rwts: total=5320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.412 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.412 job4: (groupid=0, jobs=1): err= 0: pid=90233: Mon Jul 15 02:20:19 2024 00:18:22.412 read: IOPS=477, BW=119MiB/s (125MB/s)(1208MiB/10118msec) 00:18:22.412 slat (usec): min=18, max=95146, avg=2065.09, stdev=7825.48 00:18:22.412 clat (msec): min=21, max=240, avg=131.63, stdev=19.48 00:18:22.412 lat (msec): min=23, max=241, avg=133.69, stdev=20.92 00:18:22.412 clat percentiles (msec): 00:18:22.412 | 1.00th=[ 68], 5.00th=[ 107], 10.00th=[ 113], 20.00th=[ 120], 00:18:22.412 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 136], 00:18:22.412 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 153], 95.00th=[ 159], 00:18:22.412 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 241], 99.95th=[ 241], 00:18:22.412 | 99.99th=[ 241] 00:18:22.412 bw ( KiB/s): min=96256, max=150016, per=6.52%, avg=122048.45, stdev=13151.67, samples=20 00:18:22.412 iops : min= 376, max= 586, avg=476.70, stdev=51.31, samples=20 00:18:22.412 lat (msec) : 50=0.62%, 100=2.26%, 250=97.12% 00:18:22.412 cpu : usr=0.23%, sys=1.61%, ctx=842, majf=0, minf=4097 00:18:22.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:22.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.413 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.413 job5: 
(groupid=0, jobs=1): err= 0: pid=90234: Mon Jul 15 02:20:19 2024 00:18:22.413 read: IOPS=573, BW=143MiB/s (150MB/s)(1445MiB/10083msec) 00:18:22.413 slat (usec): min=21, max=74546, avg=1726.70, stdev=5879.54 00:18:22.413 clat (msec): min=14, max=197, avg=109.75, stdev=14.50 00:18:22.413 lat (msec): min=14, max=197, avg=111.48, stdev=15.49 00:18:22.413 clat percentiles (msec): 00:18:22.413 | 1.00th=[ 80], 5.00th=[ 87], 10.00th=[ 95], 20.00th=[ 102], 00:18:22.413 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:18:22.413 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 124], 95.00th=[ 128], 00:18:22.413 | 99.00th=[ 148], 99.50th=[ 169], 99.90th=[ 199], 99.95th=[ 199], 00:18:22.413 | 99.99th=[ 199] 00:18:22.413 bw ( KiB/s): min=132360, max=177152, per=7.81%, avg=146323.65, stdev=10267.47, samples=20 00:18:22.413 iops : min= 517, max= 692, avg=571.50, stdev=40.14, samples=20 00:18:22.413 lat (msec) : 20=0.21%, 50=0.42%, 100=18.07%, 250=81.31% 00:18:22.413 cpu : usr=0.23%, sys=2.24%, ctx=1125, majf=0, minf=4097 00:18:22.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:22.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.413 issued rwts: total=5778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.413 job6: (groupid=0, jobs=1): err= 0: pid=90235: Mon Jul 15 02:20:19 2024 00:18:22.413 read: IOPS=468, BW=117MiB/s (123MB/s)(1183MiB/10103msec) 00:18:22.413 slat (usec): min=18, max=75205, avg=2110.83, stdev=6856.85 00:18:22.413 clat (msec): min=33, max=238, avg=134.29, stdev=15.81 00:18:22.413 lat (msec): min=33, max=238, avg=136.40, stdev=17.06 00:18:22.413 clat percentiles (msec): 00:18:22.413 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 122], 00:18:22.413 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 136], 60.00th=[ 138], 00:18:22.413 | 70.00th=[ 144], 80.00th=[ 148], 90.00th=[ 153], 95.00th=[ 159], 00:18:22.413 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 224], 99.95th=[ 224], 00:18:22.413 | 99.99th=[ 239] 00:18:22.413 bw ( KiB/s): min=104239, max=142848, per=6.38%, avg=119506.85, stdev=10095.33, samples=20 00:18:22.413 iops : min= 407, max= 558, avg=466.80, stdev=39.45, samples=20 00:18:22.413 lat (msec) : 50=0.11%, 100=0.08%, 250=99.81% 00:18:22.413 cpu : usr=0.19%, sys=1.87%, ctx=904, majf=0, minf=4097 00:18:22.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:22.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.413 issued rwts: total=4733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.413 job7: (groupid=0, jobs=1): err= 0: pid=90236: Mon Jul 15 02:20:19 2024 00:18:22.413 read: IOPS=471, BW=118MiB/s (124MB/s)(1192MiB/10108msec) 00:18:22.413 slat (usec): min=20, max=81017, avg=2097.81, stdev=7467.90 00:18:22.413 clat (msec): min=31, max=217, avg=133.33, stdev=16.03 00:18:22.413 lat (msec): min=31, max=225, avg=135.42, stdev=17.67 00:18:22.413 clat percentiles (msec): 00:18:22.413 | 1.00th=[ 93], 5.00th=[ 110], 10.00th=[ 115], 20.00th=[ 122], 00:18:22.413 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 134], 60.00th=[ 138], 00:18:22.413 | 70.00th=[ 142], 80.00th=[ 146], 90.00th=[ 153], 95.00th=[ 155], 00:18:22.413 | 99.00th=[ 174], 
99.50th=[ 182], 99.90th=[ 218], 99.95th=[ 218], 00:18:22.413 | 99.99th=[ 218] 00:18:22.413 bw ( KiB/s): min=108761, max=138752, per=6.43%, avg=120441.60, stdev=9056.43, samples=20 00:18:22.413 iops : min= 424, max= 542, avg=470.20, stdev=35.43, samples=20 00:18:22.413 lat (msec) : 50=0.38%, 100=1.15%, 250=98.47% 00:18:22.413 cpu : usr=0.14%, sys=1.49%, ctx=1044, majf=0, minf=4097 00:18:22.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:22.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.413 issued rwts: total=4767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.413 job8: (groupid=0, jobs=1): err= 0: pid=90237: Mon Jul 15 02:20:19 2024 00:18:22.413 read: IOPS=470, BW=118MiB/s (123MB/s)(1190MiB/10119msec) 00:18:22.413 slat (usec): min=15, max=72130, avg=2065.15, stdev=6813.76 00:18:22.413 clat (msec): min=19, max=240, avg=133.78, stdev=20.48 00:18:22.413 lat (msec): min=19, max=240, avg=135.85, stdev=21.61 00:18:22.413 clat percentiles (msec): 00:18:22.413 | 1.00th=[ 43], 5.00th=[ 109], 10.00th=[ 114], 20.00th=[ 120], 00:18:22.413 | 30.00th=[ 127], 40.00th=[ 132], 50.00th=[ 138], 60.00th=[ 142], 00:18:22.413 | 70.00th=[ 146], 80.00th=[ 148], 90.00th=[ 153], 95.00th=[ 157], 00:18:22.413 | 99.00th=[ 171], 99.50th=[ 190], 99.90th=[ 215], 99.95th=[ 215], 00:18:22.413 | 99.99th=[ 241] 00:18:22.413 bw ( KiB/s): min=104960, max=145920, per=6.42%, avg=120181.15, stdev=12917.11, samples=20 00:18:22.413 iops : min= 410, max= 570, avg=469.45, stdev=50.46, samples=20 00:18:22.413 lat (msec) : 20=0.08%, 50=1.28%, 100=1.26%, 250=97.37% 00:18:22.413 cpu : usr=0.23%, sys=1.54%, ctx=929, majf=0, minf=4097 00:18:22.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:22.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.413 issued rwts: total=4760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.413 job9: (groupid=0, jobs=1): err= 0: pid=90238: Mon Jul 15 02:20:19 2024 00:18:22.413 read: IOPS=569, BW=142MiB/s (149MB/s)(1438MiB/10091msec) 00:18:22.413 slat (usec): min=18, max=59624, avg=1739.31, stdev=5664.29 00:18:22.413 clat (msec): min=13, max=190, avg=110.38, stdev=15.91 00:18:22.413 lat (msec): min=13, max=190, avg=112.12, stdev=16.82 00:18:22.413 clat percentiles (msec): 00:18:22.413 | 1.00th=[ 37], 5.00th=[ 87], 10.00th=[ 94], 20.00th=[ 103], 00:18:22.413 | 30.00th=[ 107], 40.00th=[ 109], 50.00th=[ 112], 60.00th=[ 115], 00:18:22.413 | 70.00th=[ 117], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 131], 00:18:22.413 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 169], 99.95th=[ 190], 00:18:22.413 | 99.99th=[ 190] 00:18:22.413 bw ( KiB/s): min=129024, max=189819, per=7.77%, avg=145533.50, stdev=13328.76, samples=20 00:18:22.413 iops : min= 504, max= 741, avg=568.35, stdev=51.88, samples=20 00:18:22.413 lat (msec) : 20=0.16%, 50=1.39%, 100=15.30%, 250=83.15% 00:18:22.413 cpu : usr=0.23%, sys=2.05%, ctx=980, majf=0, minf=4097 00:18:22.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:22.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:18:22.413 issued rwts: total=5750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.413 job10: (groupid=0, jobs=1): err= 0: pid=90239: Mon Jul 15 02:20:19 2024 00:18:22.413 read: IOPS=1115, BW=279MiB/s (292MB/s)(2799MiB/10041msec) 00:18:22.413 slat (usec): min=21, max=31078, avg=887.81, stdev=3271.42 00:18:22.413 clat (msec): min=11, max=100, avg=56.39, stdev= 8.62 00:18:22.413 lat (msec): min=11, max=101, avg=57.28, stdev= 8.96 00:18:22.413 clat percentiles (msec): 00:18:22.413 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 50], 00:18:22.413 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 59], 00:18:22.413 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 67], 95.00th=[ 70], 00:18:22.413 | 99.00th=[ 78], 99.50th=[ 82], 99.90th=[ 101], 99.95th=[ 101], 00:18:22.413 | 99.99th=[ 102] 00:18:22.413 bw ( KiB/s): min=256512, max=308224, per=15.22%, avg=285026.60, stdev=15276.88, samples=20 00:18:22.413 iops : min= 1002, max= 1204, avg=1113.35, stdev=59.66, samples=20 00:18:22.413 lat (msec) : 20=0.12%, 50=20.84%, 100=78.92%, 250=0.12% 00:18:22.413 cpu : usr=0.43%, sys=3.82%, ctx=2362, majf=0, minf=4097 00:18:22.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:22.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:22.413 issued rwts: total=11197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.413 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:22.413 00:18:22.413 Run status group 0 (all jobs): 00:18:22.413 READ: bw=1829MiB/s (1917MB/s), 117MiB/s-282MiB/s (123MB/s-296MB/s), io=18.1GiB (19.4GB), run=10031-10119msec 00:18:22.413 00:18:22.413 Disk stats (read/write): 00:18:22.413 nvme0n1: ios=19794/0, merge=0/0, ticks=1229408/0, in_queue=1229408, util=97.41% 00:18:22.413 nvme10n1: ios=11085/0, merge=0/0, ticks=1241247/0, in_queue=1241247, util=97.89% 00:18:22.413 nvme1n1: ios=22508/0, merge=0/0, ticks=1232641/0, in_queue=1232641, util=97.58% 00:18:22.413 nvme2n1: ios=10524/0, merge=0/0, ticks=1234797/0, in_queue=1234797, util=98.06% 00:18:22.413 nvme3n1: ios=9555/0, merge=0/0, ticks=1239008/0, in_queue=1239008, util=98.22% 00:18:22.413 nvme4n1: ios=11429/0, merge=0/0, ticks=1236686/0, in_queue=1236686, util=98.43% 00:18:22.413 nvme5n1: ios=9339/0, merge=0/0, ticks=1242429/0, in_queue=1242429, util=98.36% 00:18:22.413 nvme6n1: ios=9407/0, merge=0/0, ticks=1238891/0, in_queue=1238891, util=98.23% 00:18:22.413 nvme7n1: ios=9392/0, merge=0/0, ticks=1241371/0, in_queue=1241371, util=98.72% 00:18:22.413 nvme8n1: ios=11372/0, merge=0/0, ticks=1241472/0, in_queue=1241472, util=99.06% 00:18:22.413 nvme9n1: ios=22334/0, merge=0/0, ticks=1230310/0, in_queue=1230310, util=98.51% 00:18:22.413 02:20:19 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:22.413 [global] 00:18:22.413 thread=1 00:18:22.413 invalidate=1 00:18:22.413 rw=randwrite 00:18:22.413 time_based=1 00:18:22.413 runtime=10 00:18:22.413 ioengine=libaio 00:18:22.413 direct=1 00:18:22.413 bs=262144 00:18:22.413 iodepth=64 00:18:22.413 norandommap=1 00:18:22.413 numjobs=1 00:18:22.413 00:18:22.413 [job0] 00:18:22.413 filename=/dev/nvme0n1 00:18:22.413 [job1] 00:18:22.413 filename=/dev/nvme10n1 00:18:22.413 [job2] 00:18:22.413 filename=/dev/nvme1n1 00:18:22.413 [job3] 00:18:22.413 filename=/dev/nvme2n1 00:18:22.413 [job4] 
00:18:22.413 filename=/dev/nvme3n1 00:18:22.413 [job5] 00:18:22.413 filename=/dev/nvme4n1 00:18:22.413 [job6] 00:18:22.413 filename=/dev/nvme5n1 00:18:22.413 [job7] 00:18:22.413 filename=/dev/nvme6n1 00:18:22.413 [job8] 00:18:22.413 filename=/dev/nvme7n1 00:18:22.413 [job9] 00:18:22.413 filename=/dev/nvme8n1 00:18:22.413 [job10] 00:18:22.413 filename=/dev/nvme9n1 00:18:22.413 Could not set queue depth (nvme0n1) 00:18:22.413 Could not set queue depth (nvme10n1) 00:18:22.413 Could not set queue depth (nvme1n1) 00:18:22.413 Could not set queue depth (nvme2n1) 00:18:22.413 Could not set queue depth (nvme3n1) 00:18:22.414 Could not set queue depth (nvme4n1) 00:18:22.414 Could not set queue depth (nvme5n1) 00:18:22.414 Could not set queue depth (nvme6n1) 00:18:22.414 Could not set queue depth (nvme7n1) 00:18:22.414 Could not set queue depth (nvme8n1) 00:18:22.414 Could not set queue depth (nvme9n1) 00:18:22.414 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:22.414 fio-3.35 00:18:22.414 Starting 11 threads 00:18:32.385 00:18:32.385 job0: (groupid=0, jobs=1): err= 0: pid=90439: Mon Jul 15 02:20:30 2024 00:18:32.385 write: IOPS=1567, BW=392MiB/s (411MB/s)(3932MiB/10036msec); 0 zone resets 00:18:32.385 slat (usec): min=15, max=6371, avg=631.72, stdev=1045.36 00:18:32.385 clat (usec): min=5947, max=73959, avg=40180.64, stdev=2181.80 00:18:32.385 lat (usec): min=5996, max=74042, avg=40812.36, stdev=2200.41 00:18:32.385 clat percentiles (usec): 00:18:32.385 | 1.00th=[37487], 5.00th=[38011], 10.00th=[38536], 20.00th=[39060], 00:18:32.385 | 30.00th=[39584], 40.00th=[39584], 50.00th=[40109], 60.00th=[40633], 00:18:32.386 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:18:32.386 | 99.00th=[43254], 99.50th=[43779], 99.90th=[63701], 99.95th=[68682], 00:18:32.386 | 99.99th=[73925] 00:18:32.386 bw ( KiB/s): min=391168, max=409088, per=23.85%, avg=401049.60, stdev=4736.75, samples=20 00:18:32.386 iops : min= 1528, max= 1598, avg=1566.60, stdev=18.50, samples=20 00:18:32.386 lat (msec) : 10=0.07%, 20=0.11%, 50=99.58%, 100=0.24% 00:18:32.386 cpu : usr=2.33%, sys=3.20%, ctx=20943, majf=0, minf=1 00:18:32.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.2%, >=64=99.6% 00:18:32.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.386 issued rwts: total=0,15729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.386 job1: (groupid=0, jobs=1): err= 0: pid=90441: Mon Jul 15 02:20:30 2024 00:18:32.386 write: IOPS=361, BW=90.3MiB/s (94.7MB/s)(918MiB/10158msec); 0 zone resets 00:18:32.386 slat (usec): min=20, max=26908, avg=2722.22, stdev=4797.88 00:18:32.386 clat (msec): min=16, max=331, avg=174.32, stdev=30.19 00:18:32.386 lat (msec): min=16, max=331, avg=177.04, stdev=30.29 00:18:32.386 clat percentiles (msec): 00:18:32.386 | 1.00th=[ 70], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 174], 00:18:32.386 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:18:32.386 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 192], 95.00th=[ 194], 00:18:32.386 | 99.00th=[ 228], 99.50th=[ 275], 99.90th=[ 321], 99.95th=[ 330], 00:18:32.386 | 99.99th=[ 330] 00:18:32.386 bw ( KiB/s): min=86016, max=137490, per=5.49%, avg=92327.30, stdev=14518.82, samples=20 00:18:32.386 iops : min= 336, max= 537, avg=360.65, stdev=56.70, samples=20 00:18:32.386 lat (msec) : 20=0.11%, 50=0.54%, 100=1.01%, 250=97.63%, 500=0.71% 00:18:32.386 cpu : usr=0.72%, sys=0.99%, ctx=4024, majf=0, minf=1 00:18:32.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:32.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.386 issued rwts: total=0,3670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.386 job2: (groupid=0, jobs=1): err= 0: pid=90451: Mon Jul 15 02:20:30 2024 00:18:32.386 write: IOPS=370, BW=92.5MiB/s (97.0MB/s)(939MiB/10146msec); 0 zone resets 00:18:32.386 slat (usec): min=19, max=20923, avg=2631.58, stdev=4668.98 00:18:32.386 clat (msec): min=12, max=332, avg=170.23, stdev=33.34 00:18:32.386 lat (msec): min=13, max=332, avg=172.86, stdev=33.59 00:18:32.386 clat percentiles (msec): 00:18:32.386 | 1.00th=[ 60], 5.00th=[ 105], 10.00th=[ 111], 20.00th=[ 169], 00:18:32.386 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:18:32.386 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 190], 95.00th=[ 192], 00:18:32.386 | 99.00th=[ 224], 99.50th=[ 271], 99.90th=[ 321], 99.95th=[ 334], 00:18:32.386 | 99.99th=[ 334] 00:18:32.386 bw ( KiB/s): min=86016, max=149504, per=5.62%, avg=94515.20, stdev=18777.14, samples=20 00:18:32.386 iops : min= 336, max= 584, avg=369.20, stdev=73.35, samples=20 00:18:32.386 lat (msec) : 20=0.11%, 50=0.75%, 100=2.50%, 250=95.95%, 500=0.69% 00:18:32.386 cpu : usr=0.85%, sys=1.30%, ctx=3794, majf=0, minf=1 00:18:32.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:32.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.386 issued rwts: total=0,3755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.386 job3: (groupid=0, jobs=1): err= 0: pid=90452: Mon Jul 15 02:20:30 2024 00:18:32.386 write: IOPS=361, BW=90.3MiB/s (94.7MB/s)(917MiB/10155msec); 0 zone resets 00:18:32.386 slat (usec): min=22, max=27988, avg=2722.16, stdev=4830.74 
00:18:32.386 clat (msec): min=11, max=341, avg=174.39, stdev=31.29 00:18:32.386 lat (msec): min=11, max=341, avg=177.11, stdev=31.42 00:18:32.386 clat percentiles (msec): 00:18:32.386 | 1.00th=[ 70], 5.00th=[ 109], 10.00th=[ 114], 20.00th=[ 171], 00:18:32.386 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:18:32.386 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 194], 95.00th=[ 199], 00:18:32.386 | 99.00th=[ 236], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 342], 00:18:32.386 | 99.99th=[ 342] 00:18:32.386 bw ( KiB/s): min=83968, max=141824, per=5.49%, avg=92288.00, stdev=15147.33, samples=20 00:18:32.386 iops : min= 328, max= 554, avg=360.50, stdev=59.17, samples=20 00:18:32.386 lat (msec) : 20=0.14%, 50=0.44%, 100=1.04%, 250=97.57%, 500=0.82% 00:18:32.386 cpu : usr=0.80%, sys=0.87%, ctx=2375, majf=0, minf=1 00:18:32.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:32.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.386 issued rwts: total=0,3668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.386 job4: (groupid=0, jobs=1): err= 0: pid=90453: Mon Jul 15 02:20:30 2024 00:18:32.386 write: IOPS=424, BW=106MiB/s (111MB/s)(1076MiB/10130msec); 0 zone resets 00:18:32.386 slat (usec): min=23, max=56277, avg=2318.67, stdev=4059.04 00:18:32.386 clat (msec): min=5, max=275, avg=148.27, stdev=15.73 00:18:32.386 lat (msec): min=5, max=275, avg=150.59, stdev=15.44 00:18:32.386 clat percentiles (msec): 00:18:32.386 | 1.00th=[ 66], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 144], 00:18:32.386 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 150], 00:18:32.386 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 153], 95.00th=[ 157], 00:18:32.386 | 99.00th=[ 180], 99.50th=[ 228], 99.90th=[ 266], 99.95th=[ 266], 00:18:32.386 | 99.99th=[ 275] 00:18:32.386 bw ( KiB/s): min=106496, max=111104, per=6.45%, avg=108544.00, stdev=1220.69, samples=20 00:18:32.386 iops : min= 416, max= 434, avg=424.00, stdev= 4.77, samples=20 00:18:32.386 lat (msec) : 10=0.21%, 20=0.19%, 50=0.37%, 100=0.46%, 250=98.54% 00:18:32.386 lat (msec) : 500=0.23% 00:18:32.386 cpu : usr=0.99%, sys=1.36%, ctx=3460, majf=0, minf=1 00:18:32.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:32.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.386 issued rwts: total=0,4303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.386 job5: (groupid=0, jobs=1): err= 0: pid=90454: Mon Jul 15 02:20:30 2024 00:18:32.386 write: IOPS=362, BW=90.7MiB/s (95.1MB/s)(922MiB/10162msec); 0 zone resets 00:18:32.386 slat (usec): min=22, max=28432, avg=2710.54, stdev=4787.41 00:18:32.386 clat (msec): min=3, max=334, avg=173.65, stdev=31.60 00:18:32.386 lat (msec): min=3, max=334, avg=176.36, stdev=31.73 00:18:32.386 clat percentiles (msec): 00:18:32.386 | 1.00th=[ 65], 5.00th=[ 109], 10.00th=[ 115], 20.00th=[ 171], 00:18:32.386 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:18:32.386 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 199], 00:18:32.386 | 99.00th=[ 228], 99.50th=[ 279], 99.90th=[ 326], 99.95th=[ 334], 00:18:32.386 | 99.99th=[ 334] 00:18:32.386 bw ( KiB/s): min=81920, 
max=146725, per=5.51%, avg=92737.85, stdev=15816.78, samples=20 00:18:32.386 iops : min= 320, max= 573, avg=362.25, stdev=61.76, samples=20 00:18:32.386 lat (msec) : 4=0.05%, 10=0.11%, 20=0.11%, 50=0.43%, 100=1.09% 00:18:32.386 lat (msec) : 250=97.40%, 500=0.81% 00:18:32.386 cpu : usr=0.81%, sys=1.10%, ctx=2827, majf=0, minf=1 00:18:32.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:32.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.386 issued rwts: total=0,3686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.386 job6: (groupid=0, jobs=1): err= 0: pid=90455: Mon Jul 15 02:20:30 2024 00:18:32.386 write: IOPS=423, BW=106MiB/s (111MB/s)(1072MiB/10125msec); 0 zone resets 00:18:32.386 slat (usec): min=22, max=53361, avg=2327.04, stdev=4061.36 00:18:32.386 clat (msec): min=5, max=264, avg=148.69, stdev=12.38 00:18:32.386 lat (msec): min=5, max=264, avg=151.01, stdev=11.88 00:18:32.386 clat percentiles (msec): 00:18:32.386 | 1.00th=[ 125], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 144], 00:18:32.386 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 150], 00:18:32.386 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 153], 95.00th=[ 155], 00:18:32.386 | 99.00th=[ 174], 99.50th=[ 218], 99.90th=[ 255], 99.95th=[ 266], 00:18:32.386 | 99.99th=[ 266] 00:18:32.386 bw ( KiB/s): min=98816, max=110592, per=6.43%, avg=108185.60, stdev=2349.80, samples=20 00:18:32.386 iops : min= 386, max= 432, avg=422.60, stdev= 9.18, samples=20 00:18:32.386 lat (msec) : 10=0.02%, 20=0.09%, 50=0.28%, 100=0.19%, 250=99.25% 00:18:32.386 lat (msec) : 500=0.16% 00:18:32.386 cpu : usr=0.81%, sys=1.07%, ctx=6460, majf=0, minf=1 00:18:32.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:32.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.386 issued rwts: total=0,4289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.386 job7: (groupid=0, jobs=1): err= 0: pid=90456: Mon Jul 15 02:20:30 2024 00:18:32.386 write: IOPS=425, BW=106MiB/s (112MB/s)(1077MiB/10123msec); 0 zone resets 00:18:32.386 slat (usec): min=19, max=35464, avg=2317.57, stdev=4010.34 00:18:32.386 clat (msec): min=10, max=267, avg=148.06, stdev=14.22 00:18:32.386 lat (msec): min=10, max=267, avg=150.37, stdev=13.88 00:18:32.386 clat percentiles (msec): 00:18:32.386 | 1.00th=[ 82], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 144], 00:18:32.386 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 150], 00:18:32.386 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 153], 95.00th=[ 155], 00:18:32.386 | 99.00th=[ 165], 99.50th=[ 222], 99.90th=[ 257], 99.95th=[ 259], 00:18:32.386 | 99.99th=[ 268] 00:18:32.386 bw ( KiB/s): min=104448, max=113664, per=6.46%, avg=108646.40, stdev=1905.62, samples=20 00:18:32.386 iops : min= 408, max= 444, avg=424.40, stdev= 7.44, samples=20 00:18:32.386 lat (msec) : 20=0.14%, 50=0.37%, 100=0.74%, 250=98.61%, 500=0.14% 00:18:32.386 cpu : usr=1.05%, sys=1.08%, ctx=3760, majf=0, minf=1 00:18:32.386 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:32.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.386 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.386 issued rwts: total=0,4307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.386 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.386 job8: (groupid=0, jobs=1): err= 0: pid=90457: Mon Jul 15 02:20:30 2024 00:18:32.386 write: IOPS=424, BW=106MiB/s (111MB/s)(1075MiB/10122msec); 0 zone resets 00:18:32.386 slat (usec): min=22, max=39972, avg=2322.04, stdev=4009.95 00:18:32.387 clat (msec): min=42, max=267, avg=148.31, stdev=11.74 00:18:32.387 lat (msec): min=42, max=268, avg=150.63, stdev=11.23 00:18:32.387 clat percentiles (msec): 00:18:32.387 | 1.00th=[ 109], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 144], 00:18:32.387 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 150], 00:18:32.387 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 153], 95.00th=[ 155], 00:18:32.387 | 99.00th=[ 165], 99.50th=[ 222], 99.90th=[ 259], 99.95th=[ 259], 00:18:32.387 | 99.99th=[ 268] 00:18:32.387 bw ( KiB/s): min=106196, max=110592, per=6.45%, avg=108452.20, stdev=1228.51, samples=20 00:18:32.387 iops : min= 414, max= 432, avg=423.60, stdev= 4.88, samples=20 00:18:32.387 lat (msec) : 50=0.07%, 100=0.74%, 250=99.05%, 500=0.14% 00:18:32.387 cpu : usr=0.90%, sys=1.06%, ctx=5578, majf=0, minf=1 00:18:32.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:18:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.387 issued rwts: total=0,4299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.387 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.387 job9: (groupid=0, jobs=1): err= 0: pid=90458: Mon Jul 15 02:20:30 2024 00:18:32.387 write: IOPS=1520, BW=380MiB/s (399MB/s)(3815MiB/10035msec); 0 zone resets 00:18:32.387 slat (usec): min=19, max=28975, avg=651.06, stdev=1168.54 00:18:32.387 clat (msec): min=6, max=134, avg=41.42, stdev=13.02 00:18:32.387 lat (msec): min=7, max=134, avg=42.07, stdev=13.21 00:18:32.387 clat percentiles (msec): 00:18:32.387 | 1.00th=[ 37], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 39], 00:18:32.387 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:18:32.387 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 42], 95.00th=[ 42], 00:18:32.387 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 127], 99.95th=[ 132], 00:18:32.387 | 99.99th=[ 136] 00:18:32.387 bw ( KiB/s): min=146432, max=418304, per=23.13%, avg=389017.60, stdev=78238.74, samples=20 00:18:32.387 iops : min= 572, max= 1634, avg=1519.60, stdev=305.62, samples=20 00:18:32.387 lat (msec) : 10=0.06%, 20=0.06%, 50=95.91%, 100=0.82%, 250=3.15% 00:18:32.387 cpu : usr=2.36%, sys=3.30%, ctx=19621, majf=0, minf=1 00:18:32.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.387 issued rwts: total=0,15259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.387 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.387 job10: (groupid=0, jobs=1): err= 0: pid=90459: Mon Jul 15 02:20:30 2024 00:18:32.387 write: IOPS=373, BW=93.3MiB/s (97.8MB/s)(947MiB/10154msec); 0 zone resets 00:18:32.387 slat (usec): min=20, max=32083, avg=2615.17, stdev=4625.43 00:18:32.387 clat (msec): min=8, max=340, avg=168.77, stdev=31.69 00:18:32.387 lat (msec): min=8, max=340, avg=171.38, stdev=31.87 00:18:32.387 clat percentiles (msec): 00:18:32.387 
| 1.00th=[ 75], 5.00th=[ 105], 10.00th=[ 111], 20.00th=[ 167], 00:18:32.387 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:18:32.387 | 70.00th=[ 184], 80.00th=[ 186], 90.00th=[ 186], 95.00th=[ 188], 00:18:32.387 | 99.00th=[ 239], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 342], 00:18:32.387 | 99.99th=[ 342] 00:18:32.387 bw ( KiB/s): min=84480, max=151040, per=5.67%, avg=95385.60, stdev=17646.59, samples=20 00:18:32.387 iops : min= 330, max= 590, avg=372.60, stdev=68.93, samples=20 00:18:32.387 lat (msec) : 10=0.18%, 20=0.13%, 50=0.11%, 100=2.19%, 250=96.60% 00:18:32.387 lat (msec) : 500=0.79% 00:18:32.387 cpu : usr=0.77%, sys=0.95%, ctx=4553, majf=0, minf=1 00:18:32.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:18:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:32.387 issued rwts: total=0,3789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.387 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:32.387 00:18:32.387 Run status group 0 (all jobs): 00:18:32.387 WRITE: bw=1642MiB/s (1722MB/s), 90.3MiB/s-392MiB/s (94.7MB/s-411MB/s), io=16.3GiB (17.5GB), run=10035-10162msec 00:18:32.387 00:18:32.387 Disk stats (read/write): 00:18:32.387 nvme0n1: ios=49/31184, merge=0/0, ticks=23/1214441, in_queue=1214464, util=97.48% 00:18:32.387 nvme10n1: ios=49/7188, merge=0/0, ticks=26/1207638, in_queue=1207664, util=97.73% 00:18:32.387 nvme1n1: ios=18/7355, merge=0/0, ticks=135/1206530, in_queue=1206665, util=98.09% 00:18:32.387 nvme2n1: ios=0/7187, merge=0/0, ticks=0/1206929, in_queue=1206929, util=97.98% 00:18:32.387 nvme3n1: ios=0/8456, merge=0/0, ticks=0/1210210, in_queue=1210210, util=98.12% 00:18:32.387 nvme4n1: ios=0/7223, merge=0/0, ticks=0/1208276, in_queue=1208276, util=98.32% 00:18:32.387 nvme5n1: ios=0/8415, merge=0/0, ticks=0/1208910, in_queue=1208910, util=98.33% 00:18:32.387 nvme6n1: ios=0/8451, merge=0/0, ticks=0/1208046, in_queue=1208046, util=98.38% 00:18:32.387 nvme7n1: ios=0/8436, merge=0/0, ticks=0/1207837, in_queue=1207837, util=98.64% 00:18:32.387 nvme8n1: ios=0/30307, merge=0/0, ticks=0/1217710, in_queue=1217710, util=98.95% 00:18:32.387 nvme9n1: ios=0/7433, merge=0/0, ticks=0/1207620, in_queue=1207620, util=98.95% 00:18:32.387 02:20:30 -- target/multiconnection.sh@36 -- # sync 00:18:32.387 02:20:30 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:32.387 02:20:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.387 02:20:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:32.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.387 02:20:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:32.387 02:20:30 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.387 02:20:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.387 02:20:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:18:32.387 02:20:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.387 02:20:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:18:32.387 02:20:30 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.387 02:20:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.387 02:20:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.387 02:20:30 -- common/autotest_common.sh@10 -- # set 
+x 00:18:32.387 02:20:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.387 02:20:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.387 02:20:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:32.387 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:32.387 02:20:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:32.387 02:20:30 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.387 02:20:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.387 02:20:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:18:32.387 02:20:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:18:32.387 02:20:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.387 02:20:30 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.387 02:20:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:32.387 02:20:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.387 02:20:30 -- common/autotest_common.sh@10 -- # set +x 00:18:32.387 02:20:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.387 02:20:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.387 02:20:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:32.387 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:32.387 02:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:32.387 02:20:31 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.387 02:20:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.387 02:20:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:18:32.387 02:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:18:32.387 02:20:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.387 02:20:31 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.387 02:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:32.387 02:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.387 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.387 02:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.387 02:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.387 02:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:32.387 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:32.387 02:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:32.387 02:20:31 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.387 02:20:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.387 02:20:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:18:32.387 02:20:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.387 02:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:18:32.387 02:20:31 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.387 02:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:32.387 02:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.387 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.387 02:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.387 02:20:31 
-- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.387 02:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:32.387 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:32.387 02:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:32.387 02:20:31 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.387 02:20:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.387 02:20:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:18:32.387 02:20:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.387 02:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:18:32.387 02:20:31 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.387 02:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:32.387 02:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.387 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.387 02:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.387 02:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.387 02:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:32.387 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:32.387 02:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:32.387 02:20:31 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.387 02:20:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.387 02:20:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:18:32.387 02:20:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.387 02:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:18:32.387 02:20:31 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.387 02:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:32.387 02:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.387 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.387 02:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.388 02:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.388 02:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:32.388 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:32.388 02:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:32.388 02:20:31 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:18:32.388 02:20:31 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.388 02:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:32.388 02:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.388 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.388 02:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.388 02:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.388 02:20:31 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:32.388 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:32.388 02:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:32.388 02:20:31 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:18:32.388 02:20:31 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.388 02:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:32.388 02:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.388 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.388 02:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.388 02:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.388 02:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:32.388 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:32.388 02:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:32.388 02:20:31 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.388 02:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:32.388 02:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.388 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.388 02:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.388 02:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.388 02:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:32.388 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:32.388 02:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:32.388 02:20:31 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:18:32.388 02:20:31 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.388 02:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:32.388 02:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.388 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.388 02:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.388 02:20:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.388 02:20:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 
00:18:32.388 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:32.388 02:20:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:32.388 02:20:31 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.388 02:20:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:18:32.388 02:20:31 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.388 02:20:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:32.388 02:20:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:32.388 02:20:31 -- common/autotest_common.sh@10 -- # set +x 00:18:32.646 02:20:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:32.646 02:20:31 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:32.646 02:20:31 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:32.646 02:20:31 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:32.646 02:20:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:32.646 02:20:31 -- nvmf/common.sh@116 -- # sync 00:18:32.646 02:20:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:32.646 02:20:31 -- nvmf/common.sh@119 -- # set +e 00:18:32.646 02:20:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:32.646 02:20:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:32.646 rmmod nvme_tcp 00:18:32.646 rmmod nvme_fabrics 00:18:32.646 rmmod nvme_keyring 00:18:32.646 02:20:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:32.646 02:20:31 -- nvmf/common.sh@123 -- # set -e 00:18:32.646 02:20:31 -- nvmf/common.sh@124 -- # return 0 00:18:32.646 02:20:31 -- nvmf/common.sh@477 -- # '[' -n 89753 ']' 00:18:32.646 02:20:31 -- nvmf/common.sh@478 -- # killprocess 89753 00:18:32.646 02:20:31 -- common/autotest_common.sh@926 -- # '[' -z 89753 ']' 00:18:32.646 02:20:31 -- common/autotest_common.sh@930 -- # kill -0 89753 00:18:32.646 02:20:31 -- common/autotest_common.sh@931 -- # uname 00:18:32.646 02:20:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:32.646 02:20:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89753 00:18:32.646 killing process with pid 89753 00:18:32.646 02:20:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:32.646 02:20:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:32.646 02:20:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89753' 00:18:32.646 02:20:32 -- common/autotest_common.sh@945 -- # kill 89753 00:18:32.646 02:20:32 -- common/autotest_common.sh@950 -- # wait 89753 00:18:33.214 02:20:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:33.214 02:20:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:33.214 02:20:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:33.214 02:20:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.214 02:20:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:33.214 02:20:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.214 02:20:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.214 02:20:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.214 02:20:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
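The per-subsystem teardown traced above condenses to the loop below. This is a sketch reconstructed purely from the xtrace lines (multiconnection.sh @37-40); helper internals beyond what the trace shows are assumptions.

    # Teardown: disconnect each initiator, wait for its block device to vanish,
    # then delete the matching subsystem on the target side.
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        waitforserial_disconnect "SPDK$i"   # polls lsblk -o NAME,SERIAL until serial SPDK$i is gone
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done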
00:18:33.214 00:18:33.214 real 0m49.572s 00:18:33.214 user 2m46.169s 00:18:33.214 sys 0m25.705s 00:18:33.214 02:20:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.214 ************************************ 00:18:33.214 02:20:32 -- common/autotest_common.sh@10 -- # set +x 00:18:33.214 END TEST nvmf_multiconnection 00:18:33.214 ************************************ 00:18:33.214 02:20:32 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:33.214 02:20:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:33.214 02:20:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:33.214 02:20:32 -- common/autotest_common.sh@10 -- # set +x 00:18:33.214 ************************************ 00:18:33.214 START TEST nvmf_initiator_timeout 00:18:33.214 ************************************ 00:18:33.214 02:20:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:33.214 * Looking for test storage... 00:18:33.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:33.214 02:20:32 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:33.214 02:20:32 -- nvmf/common.sh@7 -- # uname -s 00:18:33.214 02:20:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.214 02:20:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.214 02:20:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.214 02:20:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.214 02:20:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.214 02:20:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.214 02:20:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.214 02:20:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.214 02:20:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.214 02:20:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.214 02:20:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:18:33.214 02:20:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:18:33.214 02:20:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.214 02:20:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.214 02:20:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:33.214 02:20:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:33.214 02:20:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.214 02:20:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.214 02:20:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.214 02:20:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.214 02:20:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.214 02:20:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.214 02:20:32 -- paths/export.sh@5 -- # export PATH 00:18:33.214 02:20:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.214 02:20:32 -- nvmf/common.sh@46 -- # : 0 00:18:33.214 02:20:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:33.214 02:20:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:33.214 02:20:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:33.214 02:20:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.214 02:20:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.214 02:20:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:33.214 02:20:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:33.214 02:20:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:33.214 02:20:32 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:33.214 02:20:32 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:33.214 02:20:32 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:33.214 02:20:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:33.214 02:20:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.214 02:20:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:33.214 02:20:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:33.214 02:20:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:33.214 02:20:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.214 02:20:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.214 02:20:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.214 02:20:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:33.214 02:20:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:33.214 02:20:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:33.214 02:20:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:33.214 02:20:32 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:18:33.214 02:20:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:33.214 02:20:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.214 02:20:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.214 02:20:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:33.214 02:20:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:33.214 02:20:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:33.214 02:20:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:33.214 02:20:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:33.214 02:20:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.214 02:20:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:33.214 02:20:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:33.214 02:20:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:33.214 02:20:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:33.214 02:20:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:33.214 02:20:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:33.214 Cannot find device "nvmf_tgt_br" 00:18:33.214 02:20:32 -- nvmf/common.sh@154 -- # true 00:18:33.214 02:20:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:33.214 Cannot find device "nvmf_tgt_br2" 00:18:33.214 02:20:32 -- nvmf/common.sh@155 -- # true 00:18:33.214 02:20:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:33.214 02:20:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:33.214 Cannot find device "nvmf_tgt_br" 00:18:33.214 02:20:32 -- nvmf/common.sh@157 -- # true 00:18:33.214 02:20:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:33.473 Cannot find device "nvmf_tgt_br2" 00:18:33.473 02:20:32 -- nvmf/common.sh@158 -- # true 00:18:33.473 02:20:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:33.473 02:20:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:33.473 02:20:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:33.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:33.473 02:20:32 -- nvmf/common.sh@161 -- # true 00:18:33.473 02:20:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:33.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:33.473 02:20:32 -- nvmf/common.sh@162 -- # true 00:18:33.473 02:20:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:33.473 02:20:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:33.473 02:20:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:33.473 02:20:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:33.473 02:20:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:33.473 02:20:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:33.473 02:20:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:33.473 02:20:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:33.473 02:20:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
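Condensing the nvmf_veth_init trace so far: a network namespace hosts the target, reached from the initiator side through veth pairs; the link-up, bridge-enslaving, and iptables steps follow below. The sketch contains nothing beyond the commands already traced above.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP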
00:18:33.473 02:20:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:33.473 02:20:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:33.473 02:20:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:33.473 02:20:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:33.473 02:20:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:33.473 02:20:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:33.473 02:20:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:33.473 02:20:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:33.473 02:20:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:33.474 02:20:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:33.474 02:20:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:33.474 02:20:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:33.474 02:20:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:33.474 02:20:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:33.474 02:20:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:33.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:33.474 00:18:33.474 --- 10.0.0.2 ping statistics --- 00:18:33.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.474 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:33.474 02:20:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:33.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:33.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:33.474 00:18:33.474 --- 10.0.0.3 ping statistics --- 00:18:33.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.474 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:33.474 02:20:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:33.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:33.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:33.474 00:18:33.474 --- 10.0.0.1 ping statistics --- 00:18:33.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.474 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:33.474 02:20:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.474 02:20:33 -- nvmf/common.sh@421 -- # return 0 00:18:33.474 02:20:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:33.474 02:20:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.474 02:20:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:33.474 02:20:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:33.474 02:20:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.474 02:20:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:33.474 02:20:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:33.474 02:20:33 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:33.474 02:20:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:33.474 02:20:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:33.474 02:20:33 -- common/autotest_common.sh@10 -- # set +x 00:18:33.474 02:20:33 -- nvmf/common.sh@469 -- # nvmfpid=90823 00:18:33.474 02:20:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:33.474 02:20:33 -- nvmf/common.sh@470 -- # waitforlisten 90823 00:18:33.474 02:20:33 -- common/autotest_common.sh@819 -- # '[' -z 90823 ']' 00:18:33.733 02:20:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.733 02:20:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:33.733 02:20:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.733 02:20:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:33.733 02:20:33 -- common/autotest_common.sh@10 -- # set +x 00:18:33.733 [2024-07-15 02:20:33.084443] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:33.733 [2024-07-15 02:20:33.084542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.733 [2024-07-15 02:20:33.218588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:33.990 [2024-07-15 02:20:33.307786] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:33.990 [2024-07-15 02:20:33.307919] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.990 [2024-07-15 02:20:33.307932] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.990 [2024-07-15 02:20:33.307941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
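The -m 0xF core mask passed above selects cores 0-3, which matches the four reactor start-up notices that follow. A condensed view of the launch pattern from the trace; capturing the pid via $! is an assumption inferred from nvmfappstart's visible waitforlisten 90823 call and its rpc_addr=/var/tmp/spdk.sock default.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # returns once the app answers RPCs on /var/tmp/spdk.sock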
00:18:33.990 [2024-07-15 02:20:33.308210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.990 [2024-07-15 02:20:33.310664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.990 [2024-07-15 02:20:33.312661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.990 [2024-07-15 02:20:33.312695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.554 02:20:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:34.554 02:20:34 -- common/autotest_common.sh@852 -- # return 0 00:18:34.554 02:20:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:34.554 02:20:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:34.554 02:20:34 -- common/autotest_common.sh@10 -- # set +x 00:18:34.554 02:20:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.554 02:20:34 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:34.554 02:20:34 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:34.554 02:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.554 02:20:34 -- common/autotest_common.sh@10 -- # set +x 00:18:34.811 Malloc0 00:18:34.811 02:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.811 02:20:34 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:34.811 02:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.811 02:20:34 -- common/autotest_common.sh@10 -- # set +x 00:18:34.811 Delay0 00:18:34.811 02:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.811 02:20:34 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.811 02:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.811 02:20:34 -- common/autotest_common.sh@10 -- # set +x 00:18:34.811 [2024-07-15 02:20:34.155409] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.811 02:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.811 02:20:34 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:34.811 02:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.811 02:20:34 -- common/autotest_common.sh@10 -- # set +x 00:18:34.811 02:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.811 02:20:34 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:34.811 02:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.811 02:20:34 -- common/autotest_common.sh@10 -- # set +x 00:18:34.811 02:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.811 02:20:34 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.811 02:20:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.811 02:20:34 -- common/autotest_common.sh@10 -- # set +x 00:18:34.811 [2024-07-15 02:20:34.183565] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.811 02:20:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.811 02:20:34 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:34.811 02:20:34 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:34.811 02:20:34 -- common/autotest_common.sh@1177 -- # local i=0 00:18:34.811 02:20:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.811 02:20:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:34.811 02:20:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:37.337 02:20:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:37.337 02:20:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:37.337 02:20:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:37.337 02:20:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:37.337 02:20:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:37.337 02:20:36 -- common/autotest_common.sh@1187 -- # return 0 00:18:37.337 02:20:36 -- target/initiator_timeout.sh@35 -- # fio_pid=90911 00:18:37.337 02:20:36 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:37.337 02:20:36 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:37.337 [global] 00:18:37.337 thread=1 00:18:37.337 invalidate=1 00:18:37.337 rw=write 00:18:37.337 time_based=1 00:18:37.337 runtime=60 00:18:37.337 ioengine=libaio 00:18:37.337 direct=1 00:18:37.337 bs=4096 00:18:37.337 iodepth=1 00:18:37.337 norandommap=0 00:18:37.337 numjobs=1 00:18:37.337 00:18:37.337 verify_dump=1 00:18:37.337 verify_backlog=512 00:18:37.337 verify_state_save=0 00:18:37.337 do_verify=1 00:18:37.337 verify=crc32c-intel 00:18:37.337 [job0] 00:18:37.337 filename=/dev/nvme0n1 00:18:37.337 Could not set queue depth (nvme0n1) 00:18:37.337 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:37.337 fio-3.35 00:18:37.337 Starting 1 thread 00:18:39.865 02:20:39 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:39.865 02:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.865 02:20:39 -- common/autotest_common.sh@10 -- # set +x 00:18:39.865 true 00:18:39.865 02:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.865 02:20:39 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:39.865 02:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.865 02:20:39 -- common/autotest_common.sh@10 -- # set +x 00:18:39.865 true 00:18:39.865 02:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.865 02:20:39 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:39.865 02:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.865 02:20:39 -- common/autotest_common.sh@10 -- # set +x 00:18:39.865 true 00:18:39.865 02:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.865 02:20:39 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:39.865 02:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:39.865 02:20:39 -- common/autotest_common.sh@10 -- # set +x 00:18:39.865 true 00:18:39.865 02:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:39.865 02:20:39 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:43.150 02:20:42 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:43.150 02:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.150 02:20:42 -- common/autotest_common.sh@10 -- # set +x 00:18:43.150 true 00:18:43.150 02:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.150 02:20:42 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:43.150 02:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.150 02:20:42 -- common/autotest_common.sh@10 -- # set +x 00:18:43.150 true 00:18:43.150 02:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.150 02:20:42 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:43.150 02:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.150 02:20:42 -- common/autotest_common.sh@10 -- # set +x 00:18:43.150 true 00:18:43.150 02:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.150 02:20:42 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:43.150 02:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.150 02:20:42 -- common/autotest_common.sh@10 -- # set +x 00:18:43.150 true 00:18:43.150 02:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.150 02:20:42 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:43.150 02:20:42 -- target/initiator_timeout.sh@54 -- # wait 90911 00:19:39.371 00:19:39.371 job0: (groupid=0, jobs=1): err= 0: pid=90932: Mon Jul 15 02:21:36 2024 00:19:39.371 read: IOPS=861, BW=3447KiB/s (3530kB/s)(202MiB/60000msec) 00:19:39.371 slat (usec): min=12, max=12357, avg=15.68, stdev=71.95 00:19:39.371 clat (usec): min=3, max=40640k, avg=971.75, stdev=178712.46 00:19:39.371 lat (usec): min=169, max=40640k, avg=987.42, stdev=178712.55 00:19:39.371 clat percentiles (usec): 00:19:39.371 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 174], 00:19:39.371 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:19:39.371 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 206], 95.00th=[ 215], 00:19:39.371 | 99.00th=[ 233], 99.50th=[ 239], 99.90th=[ 265], 99.95th=[ 289], 00:19:39.371 | 99.99th=[ 1012] 00:19:39.371 write: IOPS=862, BW=3451KiB/s (3534kB/s)(202MiB/60000msec); 0 zone resets 00:19:39.371 slat (usec): min=18, max=660, avg=22.30, stdev= 6.45 00:19:39.371 clat (usec): min=3, max=1695, avg=146.69, stdev=16.13 00:19:39.371 lat (usec): min=140, max=1722, avg=168.99, stdev=17.67 00:19:39.371 clat percentiles (usec): 00:19:39.371 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:19:39.371 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:19:39.371 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 172], 00:19:39.371 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 219], 99.95th=[ 255], 00:19:39.371 | 99.99th=[ 570] 00:19:39.371 bw ( KiB/s): min= 1968, max=12288, per=100.00%, avg=10397.54, stdev=1952.52, samples=39 00:19:39.371 iops : min= 492, max= 3072, avg=2599.38, stdev=488.13, samples=39 00:19:39.371 lat (usec) : 4=0.01%, 10=0.01%, 100=0.01%, 250=99.86%, 500=0.11% 00:19:39.371 lat (usec) : 750=0.02%, 1000=0.01% 00:19:39.371 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:19:39.371 cpu : usr=0.59%, sys=2.44%, ctx=103591, majf=0, minf=2 00:19:39.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.371 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.371 issued rwts: total=51712,51770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.371 00:19:39.371 Run status group 0 (all jobs): 00:19:39.371 READ: bw=3447KiB/s (3530kB/s), 3447KiB/s-3447KiB/s (3530kB/s-3530kB/s), io=202MiB (212MB), run=60000-60000msec 00:19:39.371 WRITE: bw=3451KiB/s (3534kB/s), 3451KiB/s-3451KiB/s (3534kB/s-3534kB/s), io=202MiB (212MB), run=60000-60000msec 00:19:39.371 00:19:39.371 Disk stats (read/write): 00:19:39.371 nvme0n1: ios=51628/51712, merge=0/0, ticks=9974/8167, in_queue=18141, util=99.49% 00:19:39.371 02:21:36 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:39.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:39.371 02:21:36 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:39.371 02:21:36 -- common/autotest_common.sh@1198 -- # local i=0 00:19:39.371 02:21:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:39.371 02:21:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:39.371 02:21:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:39.371 02:21:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:39.371 nvmf hotplug test: fio successful as expected 00:19:39.371 02:21:36 -- common/autotest_common.sh@1210 -- # return 0 00:19:39.371 02:21:36 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:39.371 02:21:36 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:39.371 02:21:36 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:39.371 02:21:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.371 02:21:36 -- common/autotest_common.sh@10 -- # set +x 00:19:39.371 02:21:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.371 02:21:36 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:39.371 02:21:36 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:39.371 02:21:36 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:39.371 02:21:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:39.371 02:21:36 -- nvmf/common.sh@116 -- # sync 00:19:39.371 02:21:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:39.371 02:21:36 -- nvmf/common.sh@119 -- # set +e 00:19:39.371 02:21:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:39.371 02:21:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:39.371 rmmod nvme_tcp 00:19:39.372 rmmod nvme_fabrics 00:19:39.372 rmmod nvme_keyring 00:19:39.372 02:21:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:39.372 02:21:36 -- nvmf/common.sh@123 -- # set -e 00:19:39.372 02:21:36 -- nvmf/common.sh@124 -- # return 0 00:19:39.372 02:21:36 -- nvmf/common.sh@477 -- # '[' -n 90823 ']' 00:19:39.372 02:21:36 -- nvmf/common.sh@478 -- # killprocess 90823 00:19:39.372 02:21:36 -- common/autotest_common.sh@926 -- # '[' -z 90823 ']' 00:19:39.372 02:21:36 -- common/autotest_common.sh@930 -- # kill -0 90823 00:19:39.372 02:21:36 -- common/autotest_common.sh@931 -- # uname 00:19:39.372 02:21:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:39.372 02:21:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90823 
00:19:39.372 killing process with pid 90823 00:19:39.372 02:21:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:39.372 02:21:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:39.372 02:21:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90823' 00:19:39.372 02:21:36 -- common/autotest_common.sh@945 -- # kill 90823 00:19:39.372 02:21:36 -- common/autotest_common.sh@950 -- # wait 90823 00:19:39.372 02:21:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:39.372 02:21:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:39.372 02:21:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:39.372 02:21:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.372 02:21:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:39.372 02:21:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.372 02:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.372 02:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.372 02:21:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:39.372 00:19:39.372 real 1m4.474s 00:19:39.372 user 4m4.722s 00:19:39.372 sys 0m10.318s 00:19:39.372 02:21:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.372 ************************************ 00:19:39.372 END TEST nvmf_initiator_timeout 00:19:39.372 ************************************ 00:19:39.372 02:21:37 -- common/autotest_common.sh@10 -- # set +x 00:19:39.372 02:21:37 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:39.372 02:21:37 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:39.372 02:21:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:39.372 02:21:37 -- common/autotest_common.sh@10 -- # set +x 00:19:39.372 02:21:37 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:39.372 02:21:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:39.372 02:21:37 -- common/autotest_common.sh@10 -- # set +x 00:19:39.372 02:21:37 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:39.372 02:21:37 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:39.372 02:21:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:39.372 02:21:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:39.372 02:21:37 -- common/autotest_common.sh@10 -- # set +x 00:19:39.372 ************************************ 00:19:39.372 START TEST nvmf_multicontroller 00:19:39.372 ************************************ 00:19:39.372 02:21:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:39.372 * Looking for test storage... 
00:19:39.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:39.372 02:21:37 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:39.372 02:21:37 -- nvmf/common.sh@7 -- # uname -s 00:19:39.372 02:21:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.372 02:21:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.372 02:21:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.372 02:21:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.372 02:21:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.372 02:21:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.372 02:21:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.372 02:21:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.372 02:21:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.372 02:21:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.372 02:21:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:39.372 02:21:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:39.372 02:21:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.372 02:21:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.372 02:21:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:39.372 02:21:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:39.372 02:21:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.372 02:21:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.372 02:21:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.372 02:21:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.372 02:21:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.372 02:21:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.372 02:21:37 -- 
paths/export.sh@5 -- # export PATH 00:19:39.372 02:21:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.372 02:21:37 -- nvmf/common.sh@46 -- # : 0 00:19:39.372 02:21:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:39.372 02:21:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:39.372 02:21:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:39.372 02:21:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.372 02:21:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.372 02:21:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:39.372 02:21:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:39.372 02:21:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:39.372 02:21:37 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.372 02:21:37 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.372 02:21:37 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:39.372 02:21:37 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:39.372 02:21:37 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.372 02:21:37 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:39.372 02:21:37 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:39.372 02:21:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:39.372 02:21:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.372 02:21:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:39.372 02:21:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:39.372 02:21:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:39.373 02:21:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.373 02:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.373 02:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.373 02:21:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:39.373 02:21:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:39.373 02:21:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:39.373 02:21:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:39.373 02:21:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:39.373 02:21:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:39.373 02:21:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.373 02:21:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.373 02:21:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:39.373 02:21:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:39.373 02:21:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:39.373 02:21:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:39.373 02:21:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:39.373 02:21:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.373 02:21:37 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:39.373 02:21:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:39.373 02:21:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:39.373 02:21:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:39.373 02:21:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:39.373 02:21:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:39.373 Cannot find device "nvmf_tgt_br" 00:19:39.373 02:21:37 -- nvmf/common.sh@154 -- # true 00:19:39.373 02:21:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:39.373 Cannot find device "nvmf_tgt_br2" 00:19:39.373 02:21:37 -- nvmf/common.sh@155 -- # true 00:19:39.373 02:21:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:39.373 02:21:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:39.373 Cannot find device "nvmf_tgt_br" 00:19:39.373 02:21:37 -- nvmf/common.sh@157 -- # true 00:19:39.373 02:21:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:39.373 Cannot find device "nvmf_tgt_br2" 00:19:39.373 02:21:37 -- nvmf/common.sh@158 -- # true 00:19:39.373 02:21:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:39.373 02:21:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:39.373 02:21:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:39.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.373 02:21:37 -- nvmf/common.sh@161 -- # true 00:19:39.373 02:21:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.373 02:21:37 -- nvmf/common.sh@162 -- # true 00:19:39.373 02:21:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:39.373 02:21:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:39.373 02:21:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:39.373 02:21:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:39.373 02:21:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:39.373 02:21:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:39.373 02:21:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:39.373 02:21:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:39.373 02:21:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:39.373 02:21:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:39.373 02:21:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:39.373 02:21:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:39.373 02:21:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:39.373 02:21:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:39.373 02:21:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:39.373 02:21:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:39.373 02:21:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:39.373 02:21:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:39.373 02:21:37 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:39.373 02:21:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:39.373 02:21:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:39.373 02:21:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:39.373 02:21:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:39.373 02:21:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:39.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:19:39.373 00:19:39.373 --- 10.0.0.2 ping statistics --- 00:19:39.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.373 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:39.373 02:21:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:39.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:39.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:19:39.373 00:19:39.373 --- 10.0.0.3 ping statistics --- 00:19:39.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.373 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:39.373 02:21:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:39.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:39.373 00:19:39.373 --- 10.0.0.1 ping statistics --- 00:19:39.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.373 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:39.373 02:21:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.373 02:21:37 -- nvmf/common.sh@421 -- # return 0 00:19:39.373 02:21:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:39.373 02:21:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.373 02:21:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:39.373 02:21:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:39.373 02:21:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.373 02:21:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:39.373 02:21:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:39.373 02:21:37 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:39.373 02:21:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:39.373 02:21:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:39.373 02:21:37 -- common/autotest_common.sh@10 -- # set +x 00:19:39.373 02:21:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:39.373 02:21:37 -- nvmf/common.sh@469 -- # nvmfpid=91760 00:19:39.373 02:21:37 -- nvmf/common.sh@470 -- # waitforlisten 91760 00:19:39.373 02:21:37 -- common/autotest_common.sh@819 -- # '[' -z 91760 ']' 00:19:39.373 02:21:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.373 02:21:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:39.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.373 02:21:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
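[editor's note] For reference, the network plumbing that nvmf_veth_init traced above can be reproduced standalone with the bash sketch below. It is distilled directly from the commands in this log (assumes root plus iproute2/iptables, and elides the failed cleanup of a previous run and the 10.0.0.3/10.0.0.1 ping checks):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, first path
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, second path
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator namespace -> target reachability check, as above
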
00:19:39.373 02:21:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:39.373 02:21:37 -- common/autotest_common.sh@10 -- # set +x 00:19:39.373 [2024-07-15 02:21:37.655661] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:39.374 [2024-07-15 02:21:37.655749] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.374 [2024-07-15 02:21:37.788067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.374 [2024-07-15 02:21:37.876576] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:39.374 [2024-07-15 02:21:37.876784] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.374 [2024-07-15 02:21:37.876798] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.374 [2024-07-15 02:21:37.876807] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.374 [2024-07-15 02:21:37.877227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.374 [2024-07-15 02:21:37.877518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.374 [2024-07-15 02:21:37.877523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.374 02:21:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:39.374 02:21:38 -- common/autotest_common.sh@852 -- # return 0 00:19:39.374 02:21:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:39.374 02:21:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 02:21:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.374 02:21:38 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 [2024-07-15 02:21:38.650822] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 Malloc0 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 [2024-07-15 02:21:38.719338] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 [2024-07-15 02:21:38.727253] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 Malloc1 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:39.374 02:21:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:39.374 02:21:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:39.374 02:21:38 -- host/multicontroller.sh@44 -- # bdevperf_pid=91812 00:19:39.374 02:21:38 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:39.374 02:21:38 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:39.374 02:21:38 -- host/multicontroller.sh@47 -- # waitforlisten 91812 /var/tmp/bdevperf.sock 00:19:39.374 02:21:38 -- common/autotest_common.sh@819 -- # '[' -z 91812 ']' 00:19:39.374 02:21:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.374 02:21:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:39.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
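[editor's note] Stripped of the xtrace plumbing, the target-side configuration just issued is equivalent to the following scripts/rpc.py sequence (a sketch, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock; cnode2 repeats the cnode1 steps with Malloc1 and serial SPDK00000000000002):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf then runs as a separate app with its own RPC socket,
    # which is where the bdev_nvme_attach_controller calls below are sent:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
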
00:19:39.374 02:21:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.374 02:21:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:39.374 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:40.310 02:21:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:40.310 02:21:39 -- common/autotest_common.sh@852 -- # return 0 00:19:40.310 02:21:39 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:40.310 02:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.310 02:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:40.569 NVMe0n1 00:19:40.569 02:21:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.569 02:21:39 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:40.569 02:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.569 02:21:39 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:40.569 02:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:40.569 02:21:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.569 1 00:19:40.569 02:21:39 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:40.569 02:21:39 -- common/autotest_common.sh@640 -- # local es=0 00:19:40.569 02:21:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:40.569 02:21:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:40.569 02:21:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.569 02:21:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:40.569 02:21:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.569 02:21:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:40.569 02:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.569 02:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:40.569 2024/07/15 02:21:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:40.569 request: 00:19:40.569 { 00:19:40.569 "method": "bdev_nvme_attach_controller", 00:19:40.569 "params": { 00:19:40.569 "name": "NVMe0", 00:19:40.569 "trtype": "tcp", 00:19:40.569 "traddr": "10.0.0.2", 00:19:40.569 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:40.569 "hostaddr": "10.0.0.2", 00:19:40.569 "hostsvcid": "60000", 00:19:40.569 "adrfam": "ipv4", 00:19:40.569 "trsvcid": "4420", 00:19:40.569 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:40.569 } 00:19:40.569 } 00:19:40.569 Got JSON-RPC error 
response 00:19:40.569 GoRPCClient: error on JSON-RPC call 00:19:40.569 02:21:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:40.569 02:21:39 -- common/autotest_common.sh@643 -- # es=1 00:19:40.569 02:21:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:40.569 02:21:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:40.569 02:21:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:40.569 02:21:39 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:40.569 02:21:39 -- common/autotest_common.sh@640 -- # local es=0 00:19:40.569 02:21:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:40.569 02:21:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:40.569 02:21:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.569 02:21:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:40.569 02:21:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.569 02:21:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:40.569 02:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.569 02:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:40.569 2024/07/15 02:21:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:40.569 request: 00:19:40.569 { 00:19:40.569 "method": "bdev_nvme_attach_controller", 00:19:40.569 "params": { 00:19:40.569 "name": "NVMe0", 00:19:40.569 "trtype": "tcp", 00:19:40.569 "traddr": "10.0.0.2", 00:19:40.569 "hostaddr": "10.0.0.2", 00:19:40.569 "hostsvcid": "60000", 00:19:40.569 "adrfam": "ipv4", 00:19:40.569 "trsvcid": "4420", 00:19:40.569 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:40.570 } 00:19:40.570 } 00:19:40.570 Got JSON-RPC error response 00:19:40.570 GoRPCClient: error on JSON-RPC call 00:19:40.570 02:21:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:40.570 02:21:39 -- common/autotest_common.sh@643 -- # es=1 00:19:40.570 02:21:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:40.570 02:21:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:40.570 02:21:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:40.570 02:21:39 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:40.570 02:21:39 -- common/autotest_common.sh@640 -- # local es=0 00:19:40.570 02:21:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:40.570 02:21:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:40.570 02:21:39 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.570 02:21:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:40.570 02:21:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.570 02:21:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:40.570 02:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.570 02:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:40.570 2024/07/15 02:21:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:40.570 request: 00:19:40.570 { 00:19:40.570 "method": "bdev_nvme_attach_controller", 00:19:40.570 "params": { 00:19:40.570 "name": "NVMe0", 00:19:40.570 "trtype": "tcp", 00:19:40.570 "traddr": "10.0.0.2", 00:19:40.570 "hostaddr": "10.0.0.2", 00:19:40.570 "hostsvcid": "60000", 00:19:40.570 "adrfam": "ipv4", 00:19:40.570 "trsvcid": "4420", 00:19:40.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.570 "multipath": "disable" 00:19:40.570 } 00:19:40.570 } 00:19:40.570 Got JSON-RPC error response 00:19:40.570 GoRPCClient: error on JSON-RPC call 00:19:40.570 02:21:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:40.570 02:21:39 -- common/autotest_common.sh@643 -- # es=1 00:19:40.570 02:21:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:40.570 02:21:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:40.570 02:21:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:40.570 02:21:39 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:40.570 02:21:39 -- common/autotest_common.sh@640 -- # local es=0 00:19:40.570 02:21:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:40.570 02:21:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:40.570 02:21:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.570 02:21:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:40.570 02:21:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:40.570 02:21:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:40.570 02:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.570 02:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:40.570 2024/07/15 02:21:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified 
network path 00:19:40.570 request: 00:19:40.570 { 00:19:40.570 "method": "bdev_nvme_attach_controller", 00:19:40.570 "params": { 00:19:40.570 "name": "NVMe0", 00:19:40.570 "trtype": "tcp", 00:19:40.570 "traddr": "10.0.0.2", 00:19:40.570 "hostaddr": "10.0.0.2", 00:19:40.570 "hostsvcid": "60000", 00:19:40.570 "adrfam": "ipv4", 00:19:40.570 "trsvcid": "4420", 00:19:40.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.570 "multipath": "failover" 00:19:40.570 } 00:19:40.570 } 00:19:40.570 Got JSON-RPC error response 00:19:40.570 GoRPCClient: error on JSON-RPC call 00:19:40.570 02:21:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:40.570 02:21:39 -- common/autotest_common.sh@643 -- # es=1 00:19:40.570 02:21:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:40.570 02:21:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:40.570 02:21:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:40.570 02:21:39 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:40.570 02:21:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.570 02:21:39 -- common/autotest_common.sh@10 -- # set +x 00:19:40.570 00:19:40.570 02:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.570 02:21:40 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:40.570 02:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.570 02:21:40 -- common/autotest_common.sh@10 -- # set +x 00:19:40.570 02:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.570 02:21:40 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:40.570 02:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.570 02:21:40 -- common/autotest_common.sh@10 -- # set +x 00:19:40.570 00:19:40.570 02:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.570 02:21:40 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:40.570 02:21:40 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:40.570 02:21:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.570 02:21:40 -- common/autotest_common.sh@10 -- # set +x 00:19:40.829 02:21:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.829 02:21:40 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:40.829 02:21:40 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:41.765 0 00:19:41.765 02:21:41 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:41.765 02:21:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.765 02:21:41 -- common/autotest_common.sh@10 -- # set +x 00:19:41.765 02:21:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.765 02:21:41 -- host/multicontroller.sh@100 -- # killprocess 91812 00:19:41.765 02:21:41 -- common/autotest_common.sh@926 -- # '[' -z 91812 ']' 00:19:41.765 02:21:41 -- common/autotest_common.sh@930 -- # kill -0 91812 00:19:41.765 02:21:41 -- common/autotest_common.sh@931 -- # uname 00:19:41.765 02:21:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:19:41.765 02:21:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91812 00:19:42.025 killing process with pid 91812 00:19:42.025 02:21:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:42.025 02:21:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:42.025 02:21:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91812' 00:19:42.025 02:21:41 -- common/autotest_common.sh@945 -- # kill 91812 00:19:42.025 02:21:41 -- common/autotest_common.sh@950 -- # wait 91812 00:19:42.025 02:21:41 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.025 02:21:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:42.025 02:21:41 -- common/autotest_common.sh@10 -- # set +x 00:19:42.025 02:21:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:42.025 02:21:41 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:42.025 02:21:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:42.025 02:21:41 -- common/autotest_common.sh@10 -- # set +x 00:19:42.025 02:21:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:42.025 02:21:41 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:42.025 02:21:41 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:42.025 02:21:41 -- common/autotest_common.sh@1597 -- # read -r file 00:19:42.025 02:21:41 -- common/autotest_common.sh@1596 -- # sort -u 00:19:42.025 02:21:41 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:42.025 02:21:41 -- common/autotest_common.sh@1598 -- # cat 00:19:42.025 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:42.025 [2024-07-15 02:21:38.842323] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:42.025 [2024-07-15 02:21:38.842526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91812 ] 00:19:42.025 [2024-07-15 02:21:38.982492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.025 [2024-07-15 02:21:39.076118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.025 [2024-07-15 02:21:40.115452] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 8091ba25-c1d1-4443-a5fa-04324d7fbd53 already exists 00:19:42.025 [2024-07-15 02:21:40.115512] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:8091ba25-c1d1-4443-a5fa-04324d7fbd53 alias for bdev NVMe1n1 00:19:42.025 [2024-07-15 02:21:40.115533] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:42.025 Running I/O for 1 seconds... 
00:19:42.025 00:19:42.025 Latency(us) 00:19:42.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.025 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:42.025 NVMe0n1 : 1.01 20498.96 80.07 0.00 0.00 6225.64 3395.96 10962.39 00:19:42.025 =================================================================================================================== 00:19:42.025 Total : 20498.96 80.07 0.00 0.00 6225.64 3395.96 10962.39 00:19:42.025 Received shutdown signal, test time was about 1.000000 seconds 00:19:42.025 00:19:42.025 Latency(us) 00:19:42.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.025 =================================================================================================================== 00:19:42.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.025 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:42.025 02:21:41 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:42.025 02:21:41 -- common/autotest_common.sh@1597 -- # read -r file 00:19:42.025 02:21:41 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:42.025 02:21:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:42.025 02:21:41 -- nvmf/common.sh@116 -- # sync 00:19:42.284 02:21:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:42.284 02:21:41 -- nvmf/common.sh@119 -- # set +e 00:19:42.284 02:21:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:42.284 02:21:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:42.284 rmmod nvme_tcp 00:19:42.284 rmmod nvme_fabrics 00:19:42.284 rmmod nvme_keyring 00:19:42.284 02:21:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:42.284 02:21:41 -- nvmf/common.sh@123 -- # set -e 00:19:42.284 02:21:41 -- nvmf/common.sh@124 -- # return 0 00:19:42.284 02:21:41 -- nvmf/common.sh@477 -- # '[' -n 91760 ']' 00:19:42.284 02:21:41 -- nvmf/common.sh@478 -- # killprocess 91760 00:19:42.284 02:21:41 -- common/autotest_common.sh@926 -- # '[' -z 91760 ']' 00:19:42.284 02:21:41 -- common/autotest_common.sh@930 -- # kill -0 91760 00:19:42.284 02:21:41 -- common/autotest_common.sh@931 -- # uname 00:19:42.284 02:21:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:42.284 02:21:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91760 00:19:42.284 02:21:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:42.284 02:21:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:42.284 killing process with pid 91760 00:19:42.284 02:21:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91760' 00:19:42.284 02:21:41 -- common/autotest_common.sh@945 -- # kill 91760 00:19:42.284 02:21:41 -- common/autotest_common.sh@950 -- # wait 91760 00:19:42.543 02:21:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:42.543 02:21:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:42.543 02:21:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:42.543 02:21:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:42.543 02:21:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:42.544 02:21:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.544 02:21:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.544 02:21:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.544 02:21:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:42.544 
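[editor's note] Condensing the attach matrix the multicontroller test exercised above: every bdev_nvme_attach_controller that reused the name NVMe0 with a different hostnqn, a different subsystem NQN, '-x disable', or '-x failover' over the identical traddr/trsvcid was rejected with Code=-114, while the plain attach to the same subsystem's second listener was accepted as an additional path. Roughly (flags exactly as traced above):

    # rejected: a controller named NVMe0 already exists for this path/identity
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000 -x failover
    # accepted: same name, same subsystem, new port 4421 -> second path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
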
00:19:42.544 real 0m4.795s 00:19:42.544 user 0m15.126s 00:19:42.544 sys 0m1.069s 00:19:42.544 02:21:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.544 02:21:41 -- common/autotest_common.sh@10 -- # set +x 00:19:42.544 ************************************ 00:19:42.544 END TEST nvmf_multicontroller 00:19:42.544 ************************************ 00:19:42.544 02:21:41 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:42.544 02:21:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:42.544 02:21:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:42.544 02:21:41 -- common/autotest_common.sh@10 -- # set +x 00:19:42.544 ************************************ 00:19:42.544 START TEST nvmf_aer 00:19:42.544 ************************************ 00:19:42.544 02:21:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:42.544 * Looking for test storage... 00:19:42.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:42.544 02:21:42 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.544 02:21:42 -- nvmf/common.sh@7 -- # uname -s 00:19:42.544 02:21:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.544 02:21:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.544 02:21:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.544 02:21:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.544 02:21:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.544 02:21:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.544 02:21:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.544 02:21:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.544 02:21:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.544 02:21:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.544 02:21:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:42.544 02:21:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:42.544 02:21:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.544 02:21:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.544 02:21:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.544 02:21:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.544 02:21:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.544 02:21:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.544 02:21:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.544 02:21:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.544 02:21:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.544 02:21:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.544 02:21:42 -- paths/export.sh@5 -- # export PATH 00:19:42.544 02:21:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.544 02:21:42 -- nvmf/common.sh@46 -- # : 0 00:19:42.544 02:21:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:42.544 02:21:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:42.544 02:21:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:42.544 02:21:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.544 02:21:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.544 02:21:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:42.544 02:21:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:42.544 02:21:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:42.544 02:21:42 -- host/aer.sh@11 -- # nvmftestinit 00:19:42.544 02:21:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:42.544 02:21:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.544 02:21:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:42.544 02:21:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:42.544 02:21:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:42.544 02:21:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.544 02:21:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.544 02:21:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.803 02:21:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:42.803 02:21:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:42.803 02:21:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:42.803 02:21:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:42.803 02:21:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:42.803 02:21:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:42.803 02:21:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.803 02:21:42 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.803 02:21:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:42.803 02:21:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:42.803 02:21:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:42.803 02:21:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:42.803 02:21:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:42.803 02:21:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.803 02:21:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:42.803 02:21:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:42.803 02:21:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:42.803 02:21:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:42.803 02:21:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:42.803 02:21:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:42.803 Cannot find device "nvmf_tgt_br" 00:19:42.803 02:21:42 -- nvmf/common.sh@154 -- # true 00:19:42.803 02:21:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.803 Cannot find device "nvmf_tgt_br2" 00:19:42.803 02:21:42 -- nvmf/common.sh@155 -- # true 00:19:42.803 02:21:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:42.803 02:21:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:42.803 Cannot find device "nvmf_tgt_br" 00:19:42.803 02:21:42 -- nvmf/common.sh@157 -- # true 00:19:42.803 02:21:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:42.803 Cannot find device "nvmf_tgt_br2" 00:19:42.803 02:21:42 -- nvmf/common.sh@158 -- # true 00:19:42.803 02:21:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:42.803 02:21:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:42.803 02:21:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.803 02:21:42 -- nvmf/common.sh@161 -- # true 00:19:42.803 02:21:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.803 02:21:42 -- nvmf/common.sh@162 -- # true 00:19:42.803 02:21:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:42.803 02:21:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:42.803 02:21:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:42.803 02:21:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:42.803 02:21:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:42.803 02:21:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:42.803 02:21:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:42.803 02:21:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:42.803 02:21:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:42.803 02:21:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:42.803 02:21:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:42.803 02:21:42 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:42.803 02:21:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:42.803 02:21:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:42.803 02:21:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:42.803 02:21:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:42.803 02:21:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:42.803 02:21:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:42.803 02:21:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.062 02:21:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:43.062 02:21:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.062 02:21:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.062 02:21:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.062 02:21:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:43.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:19:43.062 00:19:43.062 --- 10.0.0.2 ping statistics --- 00:19:43.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.062 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:43.062 02:21:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:43.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:43.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:19:43.062 00:19:43.062 --- 10.0.0.3 ping statistics --- 00:19:43.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.062 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:43.062 02:21:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:43.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:43.062 00:19:43.062 --- 10.0.0.1 ping statistics --- 00:19:43.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.062 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:43.062 02:21:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.062 02:21:42 -- nvmf/common.sh@421 -- # return 0 00:19:43.062 02:21:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:43.062 02:21:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.062 02:21:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:43.062 02:21:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:43.062 02:21:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.062 02:21:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:43.062 02:21:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:43.062 02:21:42 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:43.062 02:21:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:43.062 02:21:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:43.062 02:21:42 -- common/autotest_common.sh@10 -- # set +x 00:19:43.062 02:21:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:43.062 02:21:42 -- nvmf/common.sh@469 -- # nvmfpid=92053 00:19:43.062 02:21:42 -- nvmf/common.sh@470 -- # waitforlisten 92053 00:19:43.062 02:21:42 -- common/autotest_common.sh@819 -- # '[' -z 92053 ']' 00:19:43.062 02:21:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.062 02:21:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.062 02:21:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.062 02:21:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.062 02:21:42 -- common/autotest_common.sh@10 -- # set +x 00:19:43.062 [2024-07-15 02:21:42.505198] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:43.062 [2024-07-15 02:21:42.505357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.320 [2024-07-15 02:21:42.653866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.320 [2024-07-15 02:21:42.751556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:43.320 [2024-07-15 02:21:42.751761] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.320 [2024-07-15 02:21:42.751779] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.320 [2024-07-15 02:21:42.751791] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
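
The nvmf_veth_init sequence traced above builds a small two-sided topology: one veth pair for the initiator, two for the target (moved into the nvmf_tgt_ns_spdk namespace), everything joined on one bridge, plus an iptables accept rule for the NVMe/TCP port. A condensed standalone sketch using exactly the commands from the trace (run as root; the "Cannot find device" errors above are just the cleanup pass running before anything exists):

#!/usr/bin/env bash
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                             # bridge the host ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                    # initiator to both target IPs
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target back to initiator

The sub-0.1 ms ping times in the trace are the expected sanity check that all three legs of this topology forward traffic before any NVMe/TCP traffic is attempted.
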
00:19:43.320 [2024-07-15 02:21:42.752250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.320 [2024-07-15 02:21:42.752415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.320 [2024-07-15 02:21:42.753112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.320 [2024-07-15 02:21:42.753177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.886 02:21:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:43.886 02:21:43 -- common/autotest_common.sh@852 -- # return 0 00:19:43.886 02:21:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:43.886 02:21:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:43.886 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.144 02:21:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.144 02:21:43 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:44.144 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.144 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.144 [2024-07-15 02:21:43.489396] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.144 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.144 02:21:43 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:44.144 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.144 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.144 Malloc0 00:19:44.144 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.144 02:21:43 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:44.144 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.144 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.144 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.144 02:21:43 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:44.144 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.144 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.144 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.144 02:21:43 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.144 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.144 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.144 [2024-07-15 02:21:43.554217] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.144 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.144 02:21:43 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:44.144 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.144 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.144 [2024-07-15 02:21:43.562031] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:44.144 [ 00:19:44.144 { 00:19:44.144 "allow_any_host": true, 00:19:44.144 "hosts": [], 00:19:44.144 "listen_addresses": [], 00:19:44.144 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:44.144 "subtype": "Discovery" 00:19:44.144 }, 00:19:44.144 { 00:19:44.144 "allow_any_host": true, 00:19:44.144 "hosts": 
[], 00:19:44.144 "listen_addresses": [ 00:19:44.144 { 00:19:44.145 "adrfam": "IPv4", 00:19:44.145 "traddr": "10.0.0.2", 00:19:44.145 "transport": "TCP", 00:19:44.145 "trsvcid": "4420", 00:19:44.145 "trtype": "TCP" 00:19:44.145 } 00:19:44.145 ], 00:19:44.145 "max_cntlid": 65519, 00:19:44.145 "max_namespaces": 2, 00:19:44.145 "min_cntlid": 1, 00:19:44.145 "model_number": "SPDK bdev Controller", 00:19:44.145 "namespaces": [ 00:19:44.145 { 00:19:44.145 "bdev_name": "Malloc0", 00:19:44.145 "name": "Malloc0", 00:19:44.145 "nguid": "B7D130E58E3C46A28ED38A72BC5B462F", 00:19:44.145 "nsid": 1, 00:19:44.145 "uuid": "b7d130e5-8e3c-46a2-8ed3-8a72bc5b462f" 00:19:44.145 } 00:19:44.145 ], 00:19:44.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.145 "serial_number": "SPDK00000000000001", 00:19:44.145 "subtype": "NVMe" 00:19:44.145 } 00:19:44.145 ] 00:19:44.145 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.145 02:21:43 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:44.145 02:21:43 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:44.145 02:21:43 -- host/aer.sh@33 -- # aerpid=92110 00:19:44.145 02:21:43 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:44.145 02:21:43 -- common/autotest_common.sh@1244 -- # local i=0 00:19:44.145 02:21:43 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:44.145 02:21:43 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.145 02:21:43 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:19:44.145 02:21:43 -- common/autotest_common.sh@1247 -- # i=1 00:19:44.145 02:21:43 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:44.145 02:21:43 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.145 02:21:43 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:19:44.145 02:21:43 -- common/autotest_common.sh@1247 -- # i=2 00:19:44.145 02:21:43 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:44.403 02:21:43 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.403 02:21:43 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:44.403 02:21:43 -- common/autotest_common.sh@1255 -- # return 0 00:19:44.403 02:21:43 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:44.403 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.403 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.403 Malloc1 00:19:44.403 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.403 02:21:43 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:44.403 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.403 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.403 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.403 02:21:43 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:44.403 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.403 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.403 Asynchronous Event Request test 00:19:44.403 Attaching to 10.0.0.2 00:19:44.403 Attached to 10.0.0.2 00:19:44.403 Registering asynchronous event callbacks... 00:19:44.403 Starting namespace attribute notice tests for all controllers... 
00:19:44.403 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:44.403 aer_cb - Changed Namespace 00:19:44.403 Cleaning up... 00:19:44.403 [ 00:19:44.403 { 00:19:44.403 "allow_any_host": true, 00:19:44.403 "hosts": [], 00:19:44.403 "listen_addresses": [], 00:19:44.403 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:44.403 "subtype": "Discovery" 00:19:44.403 }, 00:19:44.403 { 00:19:44.403 "allow_any_host": true, 00:19:44.403 "hosts": [], 00:19:44.403 "listen_addresses": [ 00:19:44.403 { 00:19:44.403 "adrfam": "IPv4", 00:19:44.403 "traddr": "10.0.0.2", 00:19:44.403 "transport": "TCP", 00:19:44.403 "trsvcid": "4420", 00:19:44.403 "trtype": "TCP" 00:19:44.403 } 00:19:44.403 ], 00:19:44.403 "max_cntlid": 65519, 00:19:44.403 "max_namespaces": 2, 00:19:44.403 "min_cntlid": 1, 00:19:44.403 "model_number": "SPDK bdev Controller", 00:19:44.403 "namespaces": [ 00:19:44.403 { 00:19:44.403 "bdev_name": "Malloc0", 00:19:44.403 "name": "Malloc0", 00:19:44.403 "nguid": "B7D130E58E3C46A28ED38A72BC5B462F", 00:19:44.403 "nsid": 1, 00:19:44.403 "uuid": "b7d130e5-8e3c-46a2-8ed3-8a72bc5b462f" 00:19:44.403 }, 00:19:44.403 { 00:19:44.403 "bdev_name": "Malloc1", 00:19:44.403 "name": "Malloc1", 00:19:44.403 "nguid": "E75988E1A523492693AADCED40275AC8", 00:19:44.403 "nsid": 2, 00:19:44.403 "uuid": "e75988e1-a523-4926-93aa-dced40275ac8" 00:19:44.403 } 00:19:44.403 ], 00:19:44.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.403 "serial_number": "SPDK00000000000001", 00:19:44.403 "subtype": "NVMe" 00:19:44.403 } 00:19:44.403 ] 00:19:44.403 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.403 02:21:43 -- host/aer.sh@43 -- # wait 92110 00:19:44.403 02:21:43 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:44.403 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.403 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.403 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.403 02:21:43 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:44.403 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.403 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.403 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.403 02:21:43 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:44.403 02:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.403 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:19:44.403 02:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.403 02:21:43 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:44.403 02:21:43 -- host/aer.sh@51 -- # nvmftestfini 00:19:44.403 02:21:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:44.403 02:21:43 -- nvmf/common.sh@116 -- # sync 00:19:44.665 02:21:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:44.665 02:21:43 -- nvmf/common.sh@119 -- # set +e 00:19:44.665 02:21:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:44.665 02:21:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:44.665 rmmod nvme_tcp 00:19:44.665 rmmod nvme_fabrics 00:19:44.665 rmmod nvme_keyring 00:19:44.665 02:21:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:44.665 02:21:44 -- nvmf/common.sh@123 -- # set -e 00:19:44.665 02:21:44 -- nvmf/common.sh@124 -- # return 0 00:19:44.665 02:21:44 -- nvmf/common.sh@477 -- # '[' -n 92053 ']' 00:19:44.665 02:21:44 -- nvmf/common.sh@478 -- # killprocess 92053 00:19:44.665 02:21:44 -- 
common/autotest_common.sh@926 -- # '[' -z 92053 ']' 00:19:44.665 02:21:44 -- common/autotest_common.sh@930 -- # kill -0 92053 00:19:44.665 02:21:44 -- common/autotest_common.sh@931 -- # uname 00:19:44.665 02:21:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:44.665 02:21:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92053 00:19:44.665 02:21:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:44.665 02:21:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:44.665 killing process with pid 92053 00:19:44.665 02:21:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92053' 00:19:44.665 02:21:44 -- common/autotest_common.sh@945 -- # kill 92053 00:19:44.665 [2024-07-15 02:21:44.088616] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:44.665 02:21:44 -- common/autotest_common.sh@950 -- # wait 92053 00:19:44.924 02:21:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:44.924 02:21:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:44.924 02:21:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:44.924 02:21:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.924 02:21:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:44.924 02:21:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.924 02:21:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.924 02:21:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.924 02:21:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:44.924 00:19:44.924 real 0m2.326s 00:19:44.924 user 0m6.309s 00:19:44.924 sys 0m0.672s 00:19:44.924 02:21:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.924 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:19:44.924 ************************************ 00:19:44.924 END TEST nvmf_aer 00:19:44.924 ************************************ 00:19:44.924 02:21:44 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:44.924 02:21:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:44.924 02:21:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:44.924 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:19:44.924 ************************************ 00:19:44.924 START TEST nvmf_async_init 00:19:44.924 ************************************ 00:19:44.924 02:21:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:44.924 * Looking for test storage... 
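
For reference, the mechanism of the nvmf_aer test that just passed: the aer tool connects to the subsystem, arms asynchronous event requests, and touches a file to signal readiness; the harness polls for that file, hot-adds a second namespace, and the tool's aer_cb observes the resulting Namespace Attribute Changed notice. A sketch of that flow, with the tool path, flags, and loop bound taken from the trace:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
AER=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
rm -f /tmp/aer_touch_file
$AER -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
     -n 2 -t /tmp/aer_touch_file &
aerpid=$!
i=0                                                  # waitforfile: poll up to 200 x 0.1 s
until [ -e /tmp/aer_touch_file ] || [ $i -ge 200 ]; do sleep 0.1; i=$((i + 1)); done
$RPC bdev_malloc_create 64 4096 --name Malloc1       # second bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the AEN
wait $aerpid    # aer exits after its callback sees the notice (log page 4, event type 0x02)
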
00:19:44.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:44.924 02:21:44 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:44.924 02:21:44 -- nvmf/common.sh@7 -- # uname -s 00:19:44.924 02:21:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.924 02:21:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.924 02:21:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.924 02:21:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.924 02:21:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.924 02:21:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.924 02:21:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.924 02:21:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.924 02:21:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.924 02:21:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.924 02:21:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:44.924 02:21:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:44.924 02:21:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.924 02:21:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.924 02:21:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:44.924 02:21:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:44.924 02:21:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.924 02:21:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.924 02:21:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.924 02:21:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.924 02:21:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.924 02:21:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.924 02:21:44 -- 
paths/export.sh@5 -- # export PATH 00:19:44.924 02:21:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.924 02:21:44 -- nvmf/common.sh@46 -- # : 0 00:19:44.924 02:21:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:44.924 02:21:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:44.924 02:21:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:44.924 02:21:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.924 02:21:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.924 02:21:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:44.924 02:21:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:44.924 02:21:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:44.924 02:21:44 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:44.924 02:21:44 -- host/async_init.sh@14 -- # null_block_size=512 00:19:44.924 02:21:44 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:44.924 02:21:44 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:44.924 02:21:44 -- host/async_init.sh@20 -- # uuidgen 00:19:44.924 02:21:44 -- host/async_init.sh@20 -- # tr -d - 00:19:45.183 02:21:44 -- host/async_init.sh@20 -- # nguid=8415507955a844939e792c06bb30c298 00:19:45.183 02:21:44 -- host/async_init.sh@22 -- # nvmftestinit 00:19:45.183 02:21:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:45.183 02:21:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.183 02:21:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:45.183 02:21:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:45.183 02:21:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:45.183 02:21:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.183 02:21:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.183 02:21:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.183 02:21:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:45.183 02:21:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:45.183 02:21:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:45.183 02:21:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:45.183 02:21:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:45.183 02:21:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:45.183 02:21:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.183 02:21:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.183 02:21:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:45.183 02:21:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:45.183 02:21:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:45.183 02:21:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:45.183 02:21:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:45.183 02:21:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.183 02:21:44 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:45.183 02:21:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:45.183 02:21:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:45.183 02:21:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:45.183 02:21:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:45.183 02:21:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:45.183 Cannot find device "nvmf_tgt_br" 00:19:45.183 02:21:44 -- nvmf/common.sh@154 -- # true 00:19:45.183 02:21:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:45.183 Cannot find device "nvmf_tgt_br2" 00:19:45.183 02:21:44 -- nvmf/common.sh@155 -- # true 00:19:45.183 02:21:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:45.183 02:21:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:45.183 Cannot find device "nvmf_tgt_br" 00:19:45.183 02:21:44 -- nvmf/common.sh@157 -- # true 00:19:45.183 02:21:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:45.183 Cannot find device "nvmf_tgt_br2" 00:19:45.183 02:21:44 -- nvmf/common.sh@158 -- # true 00:19:45.183 02:21:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:45.183 02:21:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:45.183 02:21:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:45.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.183 02:21:44 -- nvmf/common.sh@161 -- # true 00:19:45.183 02:21:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:45.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:45.183 02:21:44 -- nvmf/common.sh@162 -- # true 00:19:45.183 02:21:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:45.183 02:21:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:45.183 02:21:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:45.183 02:21:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:45.183 02:21:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:45.183 02:21:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:45.183 02:21:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:45.183 02:21:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:45.183 02:21:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:45.446 02:21:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:45.446 02:21:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:45.446 02:21:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:45.446 02:21:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:45.446 02:21:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:45.446 02:21:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:45.446 02:21:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:45.446 02:21:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:45.446 02:21:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:45.446 02:21:44 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:45.446 02:21:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:45.446 02:21:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:45.446 02:21:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:45.446 02:21:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.446 02:21:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:45.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:19:45.446 00:19:45.446 --- 10.0.0.2 ping statistics --- 00:19:45.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.446 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:45.446 02:21:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:45.446 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:45.446 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:45.446 00:19:45.446 --- 10.0.0.3 ping statistics --- 00:19:45.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.446 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:45.446 02:21:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:45.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:45.446 00:19:45.446 --- 10.0.0.1 ping statistics --- 00:19:45.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.446 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:45.446 02:21:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.446 02:21:44 -- nvmf/common.sh@421 -- # return 0 00:19:45.446 02:21:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:45.446 02:21:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.446 02:21:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:45.446 02:21:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:45.446 02:21:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.446 02:21:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:45.446 02:21:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:45.446 02:21:44 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:45.446 02:21:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:45.446 02:21:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:45.446 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:19:45.446 02:21:44 -- nvmf/common.sh@469 -- # nvmfpid=92288 00:19:45.446 02:21:44 -- nvmf/common.sh@470 -- # waitforlisten 92288 00:19:45.446 02:21:44 -- common/autotest_common.sh@819 -- # '[' -z 92288 ']' 00:19:45.446 02:21:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:45.446 02:21:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.446 02:21:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:45.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.446 02:21:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
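
nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until /var/tmp/spdk.sock serves RPCs, with max_retries=100 per the trace. A sketch of that start-and-wait pattern; the spdk_get_version probe is an assumption on my part, any RPC that succeeds once the app listens would do:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" || exit 1       # target died during startup
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version \
        > /dev/null 2>&1 && break      # socket is up and RPCs are being served
    sleep 0.5
done

Note the core mask difference from the aer run: async_init pins the target to a single core (-m 0x1), which is why only one "Reactor started on core 0" notice follows below.
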
00:19:45.446 02:21:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:45.446 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:19:45.446 [2024-07-15 02:21:44.933778] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:45.446 [2024-07-15 02:21:44.933863] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.713 [2024-07-15 02:21:45.071705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.713 [2024-07-15 02:21:45.159685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:45.713 [2024-07-15 02:21:45.159935] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.713 [2024-07-15 02:21:45.159957] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.713 [2024-07-15 02:21:45.159981] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.713 [2024-07-15 02:21:45.160016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.646 02:21:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:46.646 02:21:45 -- common/autotest_common.sh@852 -- # return 0 00:19:46.646 02:21:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:46.646 02:21:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:46.646 02:21:45 -- common/autotest_common.sh@10 -- # set +x 00:19:46.646 02:21:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.646 02:21:45 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:46.646 02:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.646 02:21:45 -- common/autotest_common.sh@10 -- # set +x 00:19:46.646 [2024-07-15 02:21:45.930316] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.646 02:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.646 02:21:45 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:46.646 02:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.646 02:21:45 -- common/autotest_common.sh@10 -- # set +x 00:19:46.646 null0 00:19:46.646 02:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.646 02:21:45 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:46.646 02:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.646 02:21:45 -- common/autotest_common.sh@10 -- # set +x 00:19:46.646 02:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.646 02:21:45 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:46.646 02:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.646 02:21:45 -- common/autotest_common.sh@10 -- # set +x 00:19:46.646 02:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.646 02:21:45 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8415507955a844939e792c06bb30c298 00:19:46.646 02:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.646 02:21:45 -- common/autotest_common.sh@10 -- # set +x 00:19:46.646 02:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.646 02:21:45 -- host/async_init.sh@31 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:46.646 02:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.646 02:21:45 -- common/autotest_common.sh@10 -- # set +x 00:19:46.646 [2024-07-15 02:21:45.970452] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.646 02:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.646 02:21:45 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:46.646 02:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.646 02:21:45 -- common/autotest_common.sh@10 -- # set +x 00:19:46.646 nvme0n1 00:19:46.646 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.646 02:21:46 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:46.646 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.904 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:46.904 [ 00:19:46.904 { 00:19:46.904 "aliases": [ 00:19:46.904 "84155079-55a8-4493-9e79-2c06bb30c298" 00:19:46.904 ], 00:19:46.904 "assigned_rate_limits": { 00:19:46.904 "r_mbytes_per_sec": 0, 00:19:46.904 "rw_ios_per_sec": 0, 00:19:46.904 "rw_mbytes_per_sec": 0, 00:19:46.904 "w_mbytes_per_sec": 0 00:19:46.904 }, 00:19:46.904 "block_size": 512, 00:19:46.904 "claimed": false, 00:19:46.904 "driver_specific": { 00:19:46.904 "mp_policy": "active_passive", 00:19:46.904 "nvme": [ 00:19:46.904 { 00:19:46.904 "ctrlr_data": { 00:19:46.904 "ana_reporting": false, 00:19:46.904 "cntlid": 1, 00:19:46.904 "firmware_revision": "24.01.1", 00:19:46.904 "model_number": "SPDK bdev Controller", 00:19:46.904 "multi_ctrlr": true, 00:19:46.904 "oacs": { 00:19:46.904 "firmware": 0, 00:19:46.904 "format": 0, 00:19:46.904 "ns_manage": 0, 00:19:46.904 "security": 0 00:19:46.904 }, 00:19:46.904 "serial_number": "00000000000000000000", 00:19:46.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.904 "vendor_id": "0x8086" 00:19:46.904 }, 00:19:46.904 "ns_data": { 00:19:46.904 "can_share": true, 00:19:46.904 "id": 1 00:19:46.904 }, 00:19:46.904 "trid": { 00:19:46.904 "adrfam": "IPv4", 00:19:46.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.904 "traddr": "10.0.0.2", 00:19:46.904 "trsvcid": "4420", 00:19:46.904 "trtype": "TCP" 00:19:46.904 }, 00:19:46.904 "vs": { 00:19:46.904 "nvme_version": "1.3" 00:19:46.904 } 00:19:46.904 } 00:19:46.904 ] 00:19:46.904 }, 00:19:46.904 "name": "nvme0n1", 00:19:46.904 "num_blocks": 2097152, 00:19:46.904 "product_name": "NVMe disk", 00:19:46.904 "supported_io_types": { 00:19:46.904 "abort": true, 00:19:46.905 "compare": true, 00:19:46.905 "compare_and_write": true, 00:19:46.905 "flush": true, 00:19:46.905 "nvme_admin": true, 00:19:46.905 "nvme_io": true, 00:19:46.905 "read": true, 00:19:46.905 "reset": true, 00:19:46.905 "unmap": false, 00:19:46.905 "write": true, 00:19:46.905 "write_zeroes": true 00:19:46.905 }, 00:19:46.905 "uuid": "84155079-55a8-4493-9e79-2c06bb30c298", 00:19:46.905 "zoned": false 00:19:46.905 } 00:19:46.905 ] 00:19:46.905 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.905 02:21:46 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:46.905 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.905 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:46.905 [2024-07-15 02:21:46.226527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:46.905 [2024-07-15 02:21:46.226662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25b7c20 (9): Bad file descriptor 00:19:46.905 [2024-07-15 02:21:46.358811] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:46.905 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.905 02:21:46 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:46.905 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.905 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:46.905 [ 00:19:46.905 { 00:19:46.905 "aliases": [ 00:19:46.905 "84155079-55a8-4493-9e79-2c06bb30c298" 00:19:46.905 ], 00:19:46.905 "assigned_rate_limits": { 00:19:46.905 "r_mbytes_per_sec": 0, 00:19:46.905 "rw_ios_per_sec": 0, 00:19:46.905 "rw_mbytes_per_sec": 0, 00:19:46.905 "w_mbytes_per_sec": 0 00:19:46.905 }, 00:19:46.905 "block_size": 512, 00:19:46.905 "claimed": false, 00:19:46.905 "driver_specific": { 00:19:46.905 "mp_policy": "active_passive", 00:19:46.905 "nvme": [ 00:19:46.905 { 00:19:46.905 "ctrlr_data": { 00:19:46.905 "ana_reporting": false, 00:19:46.905 "cntlid": 2, 00:19:46.905 "firmware_revision": "24.01.1", 00:19:46.905 "model_number": "SPDK bdev Controller", 00:19:46.905 "multi_ctrlr": true, 00:19:46.905 "oacs": { 00:19:46.905 "firmware": 0, 00:19:46.905 "format": 0, 00:19:46.905 "ns_manage": 0, 00:19:46.905 "security": 0 00:19:46.905 }, 00:19:46.905 "serial_number": "00000000000000000000", 00:19:46.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.905 "vendor_id": "0x8086" 00:19:46.905 }, 00:19:46.905 "ns_data": { 00:19:46.905 "can_share": true, 00:19:46.905 "id": 1 00:19:46.905 }, 00:19:46.905 "trid": { 00:19:46.905 "adrfam": "IPv4", 00:19:46.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.905 "traddr": "10.0.0.2", 00:19:46.905 "trsvcid": "4420", 00:19:46.905 "trtype": "TCP" 00:19:46.905 }, 00:19:46.905 "vs": { 00:19:46.905 "nvme_version": "1.3" 00:19:46.905 } 00:19:46.905 } 00:19:46.905 ] 00:19:46.905 }, 00:19:46.905 "name": "nvme0n1", 00:19:46.905 "num_blocks": 2097152, 00:19:46.905 "product_name": "NVMe disk", 00:19:46.905 "supported_io_types": { 00:19:46.905 "abort": true, 00:19:46.905 "compare": true, 00:19:46.905 "compare_and_write": true, 00:19:46.905 "flush": true, 00:19:46.905 "nvme_admin": true, 00:19:46.905 "nvme_io": true, 00:19:46.905 "read": true, 00:19:46.905 "reset": true, 00:19:46.905 "unmap": false, 00:19:46.905 "write": true, 00:19:46.905 "write_zeroes": true 00:19:46.905 }, 00:19:46.905 "uuid": "84155079-55a8-4493-9e79-2c06bb30c298", 00:19:46.905 "zoned": false 00:19:46.905 } 00:19:46.905 ] 00:19:46.905 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.905 02:21:46 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.905 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.905 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:46.905 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.905 02:21:46 -- host/async_init.sh@53 -- # mktemp 00:19:46.905 02:21:46 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.hU80JmVjXJ 00:19:46.905 02:21:46 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:46.905 02:21:46 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.hU80JmVjXJ 00:19:46.905 02:21:46 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:19:46.905 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.905 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:46.905 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.905 02:21:46 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:46.905 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.905 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:46.905 [2024-07-15 02:21:46.422702] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.905 [2024-07-15 02:21:46.422864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:46.905 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.905 02:21:46 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hU80JmVjXJ 00:19:46.905 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.905 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:46.905 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.905 02:21:46 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hU80JmVjXJ 00:19:46.905 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.905 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:46.905 [2024-07-15 02:21:46.438682] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.163 nvme0n1 00:19:47.163 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.163 02:21:46 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:47.163 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.163 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:47.163 [ 00:19:47.163 { 00:19:47.163 "aliases": [ 00:19:47.163 "84155079-55a8-4493-9e79-2c06bb30c298" 00:19:47.163 ], 00:19:47.163 "assigned_rate_limits": { 00:19:47.163 "r_mbytes_per_sec": 0, 00:19:47.163 "rw_ios_per_sec": 0, 00:19:47.163 "rw_mbytes_per_sec": 0, 00:19:47.163 "w_mbytes_per_sec": 0 00:19:47.163 }, 00:19:47.163 "block_size": 512, 00:19:47.163 "claimed": false, 00:19:47.163 "driver_specific": { 00:19:47.163 "mp_policy": "active_passive", 00:19:47.163 "nvme": [ 00:19:47.163 { 00:19:47.163 "ctrlr_data": { 00:19:47.163 "ana_reporting": false, 00:19:47.163 "cntlid": 3, 00:19:47.163 "firmware_revision": "24.01.1", 00:19:47.163 "model_number": "SPDK bdev Controller", 00:19:47.163 "multi_ctrlr": true, 00:19:47.163 "oacs": { 00:19:47.163 "firmware": 0, 00:19:47.163 "format": 0, 00:19:47.163 "ns_manage": 0, 00:19:47.163 "security": 0 00:19:47.163 }, 00:19:47.163 "serial_number": "00000000000000000000", 00:19:47.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:47.163 "vendor_id": "0x8086" 00:19:47.163 }, 00:19:47.163 "ns_data": { 00:19:47.163 "can_share": true, 00:19:47.163 "id": 1 00:19:47.163 }, 00:19:47.163 "trid": { 00:19:47.163 "adrfam": "IPv4", 00:19:47.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:47.163 "traddr": "10.0.0.2", 00:19:47.163 "trsvcid": "4421", 00:19:47.163 "trtype": "TCP" 00:19:47.163 }, 00:19:47.163 "vs": { 00:19:47.163 "nvme_version": "1.3" 00:19:47.163 } 00:19:47.163 } 00:19:47.163 ] 00:19:47.163 }, 00:19:47.163 
"name": "nvme0n1", 00:19:47.163 "num_blocks": 2097152, 00:19:47.163 "product_name": "NVMe disk", 00:19:47.163 "supported_io_types": { 00:19:47.163 "abort": true, 00:19:47.163 "compare": true, 00:19:47.163 "compare_and_write": true, 00:19:47.163 "flush": true, 00:19:47.163 "nvme_admin": true, 00:19:47.163 "nvme_io": true, 00:19:47.163 "read": true, 00:19:47.163 "reset": true, 00:19:47.163 "unmap": false, 00:19:47.163 "write": true, 00:19:47.163 "write_zeroes": true 00:19:47.163 }, 00:19:47.163 "uuid": "84155079-55a8-4493-9e79-2c06bb30c298", 00:19:47.163 "zoned": false 00:19:47.163 } 00:19:47.163 ] 00:19:47.163 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.163 02:21:46 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.163 02:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:47.163 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:47.163 02:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:47.163 02:21:46 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.hU80JmVjXJ 00:19:47.163 02:21:46 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:47.163 02:21:46 -- host/async_init.sh@78 -- # nvmftestfini 00:19:47.163 02:21:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:47.163 02:21:46 -- nvmf/common.sh@116 -- # sync 00:19:47.163 02:21:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:47.163 02:21:46 -- nvmf/common.sh@119 -- # set +e 00:19:47.163 02:21:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:47.163 02:21:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:47.163 rmmod nvme_tcp 00:19:47.163 rmmod nvme_fabrics 00:19:47.163 rmmod nvme_keyring 00:19:47.163 02:21:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:47.163 02:21:46 -- nvmf/common.sh@123 -- # set -e 00:19:47.163 02:21:46 -- nvmf/common.sh@124 -- # return 0 00:19:47.163 02:21:46 -- nvmf/common.sh@477 -- # '[' -n 92288 ']' 00:19:47.163 02:21:46 -- nvmf/common.sh@478 -- # killprocess 92288 00:19:47.163 02:21:46 -- common/autotest_common.sh@926 -- # '[' -z 92288 ']' 00:19:47.163 02:21:46 -- common/autotest_common.sh@930 -- # kill -0 92288 00:19:47.163 02:21:46 -- common/autotest_common.sh@931 -- # uname 00:19:47.163 02:21:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:47.163 02:21:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92288 00:19:47.163 02:21:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:47.163 02:21:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:47.163 killing process with pid 92288 00:19:47.163 02:21:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92288' 00:19:47.163 02:21:46 -- common/autotest_common.sh@945 -- # kill 92288 00:19:47.163 02:21:46 -- common/autotest_common.sh@950 -- # wait 92288 00:19:47.420 02:21:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:47.420 02:21:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:47.420 02:21:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:47.420 02:21:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.420 02:21:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:47.420 02:21:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.420 02:21:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.420 02:21:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.420 02:21:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:47.420 
00:19:47.420 real 0m2.530s 00:19:47.420 user 0m2.271s 00:19:47.420 sys 0m0.595s 00:19:47.420 02:21:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.421 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:47.421 ************************************ 00:19:47.421 END TEST nvmf_async_init 00:19:47.421 ************************************ 00:19:47.421 02:21:46 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:47.421 02:21:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:47.421 02:21:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:47.421 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:47.421 ************************************ 00:19:47.421 START TEST dma 00:19:47.421 ************************************ 00:19:47.421 02:21:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:47.678 * Looking for test storage... 00:19:47.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.678 02:21:47 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.678 02:21:47 -- nvmf/common.sh@7 -- # uname -s 00:19:47.678 02:21:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.678 02:21:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.678 02:21:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.678 02:21:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.678 02:21:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.678 02:21:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.678 02:21:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.678 02:21:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.678 02:21:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.678 02:21:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.678 02:21:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:47.678 02:21:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:47.678 02:21:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.678 02:21:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.678 02:21:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.678 02:21:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.678 02:21:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.678 02:21:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.678 02:21:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.678 02:21:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.679 02:21:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.679 02:21:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.679 02:21:47 -- paths/export.sh@5 -- # export PATH 00:19:47.679 02:21:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.679 02:21:47 -- nvmf/common.sh@46 -- # : 0 00:19:47.679 02:21:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.679 02:21:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.679 02:21:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.679 02:21:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.679 02:21:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.679 02:21:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.679 02:21:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.679 02:21:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.679 02:21:47 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:47.679 02:21:47 -- host/dma.sh@13 -- # exit 0 00:19:47.679 00:19:47.679 real 0m0.103s 00:19:47.679 user 0m0.052s 00:19:47.679 sys 0m0.058s 00:19:47.679 02:21:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.679 02:21:47 -- common/autotest_common.sh@10 -- # set +x 00:19:47.679 ************************************ 00:19:47.679 END TEST dma 00:19:47.679 ************************************ 00:19:47.679 02:21:47 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:47.679 02:21:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:47.679 02:21:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:47.679 02:21:47 -- common/autotest_common.sh@10 -- # set +x 00:19:47.679 ************************************ 00:19:47.679 START TEST nvmf_identify 00:19:47.679 ************************************ 00:19:47.679 02:21:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:47.679 * Looking for test storage... 
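
A note on the near-instant dma pass above: host/dma.sh only exercises RDMA memory translation, so on --transport=tcp it bails out at the top (dma.sh@12-13 in the trace), which is why the whole TEST dma block finishes in about a tenth of a second. The guard amounts to the following; the variable name is illustrative, since the xtrace only shows the already-expanded test '[' tcp '!=' rdma ']':

if [ "$TEST_TRANSPORT" != rdma ]; then
    exit 0    # nothing to test for tcp; report success and move on
fi
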
00:19:47.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:47.679 02:21:47 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.679 02:21:47 -- nvmf/common.sh@7 -- # uname -s 00:19:47.679 02:21:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.679 02:21:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.679 02:21:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.679 02:21:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.679 02:21:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.679 02:21:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.679 02:21:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.679 02:21:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.679 02:21:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.679 02:21:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.679 02:21:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:47.679 02:21:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:47.679 02:21:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.679 02:21:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.679 02:21:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.679 02:21:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.679 02:21:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.679 02:21:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.679 02:21:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.679 02:21:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.679 02:21:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.679 02:21:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.679 02:21:47 -- paths/export.sh@5 
-- # export PATH 00:19:47.679 02:21:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.679 02:21:47 -- nvmf/common.sh@46 -- # : 0 00:19:47.679 02:21:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.679 02:21:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.679 02:21:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.679 02:21:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.679 02:21:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.679 02:21:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.679 02:21:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.679 02:21:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.679 02:21:47 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:47.679 02:21:47 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:47.679 02:21:47 -- host/identify.sh@14 -- # nvmftestinit 00:19:47.679 02:21:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.679 02:21:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.679 02:21:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.679 02:21:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.679 02:21:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.679 02:21:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.679 02:21:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.679 02:21:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.679 02:21:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:47.679 02:21:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:47.679 02:21:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:47.679 02:21:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:47.679 02:21:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:47.679 02:21:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:47.679 02:21:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.679 02:21:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.679 02:21:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:47.679 02:21:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:47.679 02:21:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.679 02:21:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.679 02:21:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.679 02:21:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.679 02:21:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.679 02:21:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.679 02:21:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.679 02:21:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.679 02:21:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:47.679 02:21:47 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:47.937 Cannot find device "nvmf_tgt_br" 00:19:47.937 02:21:47 -- nvmf/common.sh@154 -- # true 00:19:47.937 02:21:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.937 Cannot find device "nvmf_tgt_br2" 00:19:47.937 02:21:47 -- nvmf/common.sh@155 -- # true 00:19:47.937 02:21:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:47.937 02:21:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:47.937 Cannot find device "nvmf_tgt_br" 00:19:47.937 02:21:47 -- nvmf/common.sh@157 -- # true 00:19:47.937 02:21:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:47.937 Cannot find device "nvmf_tgt_br2" 00:19:47.937 02:21:47 -- nvmf/common.sh@158 -- # true 00:19:47.937 02:21:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:47.937 02:21:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:47.937 02:21:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.937 02:21:47 -- nvmf/common.sh@161 -- # true 00:19:47.937 02:21:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.937 02:21:47 -- nvmf/common.sh@162 -- # true 00:19:47.937 02:21:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.937 02:21:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.937 02:21:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.937 02:21:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.937 02:21:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.937 02:21:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.937 02:21:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.937 02:21:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:47.937 02:21:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:47.937 02:21:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:47.937 02:21:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:47.937 02:21:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:47.937 02:21:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:47.937 02:21:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.937 02:21:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.937 02:21:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.937 02:21:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:47.937 02:21:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:47.937 02:21:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.937 02:21:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.937 02:21:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.937 02:21:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.937 02:21:47 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.196 02:21:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:48.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:19:48.196 00:19:48.196 --- 10.0.0.2 ping statistics --- 00:19:48.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.196 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:48.196 02:21:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:48.196 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:48.196 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:19:48.196 00:19:48.196 --- 10.0.0.3 ping statistics --- 00:19:48.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.196 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:48.196 02:21:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:48.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:48.196 00:19:48.196 --- 10.0.0.1 ping statistics --- 00:19:48.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.196 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:48.196 02:21:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.196 02:21:47 -- nvmf/common.sh@421 -- # return 0 00:19:48.196 02:21:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:48.196 02:21:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.196 02:21:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:48.196 02:21:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:48.196 02:21:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.196 02:21:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:48.196 02:21:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:48.196 02:21:47 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:48.196 02:21:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:48.196 02:21:47 -- common/autotest_common.sh@10 -- # set +x 00:19:48.196 02:21:47 -- host/identify.sh@19 -- # nvmfpid=92549 00:19:48.196 02:21:47 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:48.196 02:21:47 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.196 02:21:47 -- host/identify.sh@23 -- # waitforlisten 92549 00:19:48.196 02:21:47 -- common/autotest_common.sh@819 -- # '[' -z 92549 ']' 00:19:48.196 02:21:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.196 02:21:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:48.196 02:21:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.196 02:21:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:48.196 02:21:47 -- common/autotest_common.sh@10 -- # set +x 00:19:48.196 [2024-07-15 02:21:47.592297] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
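The three pings above confirm the topology nvmf_veth_init just assembled: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) sit inside the nvmf_tgt_ns_spdk namespace, and the host-side veth peers hang off the nvmf_br bridge, with iptables admitting NVMe/TCP traffic on port 4420. A minimal sketch of that setup, condensed from the ip(8)/iptables calls traced above (cleanup and error handling omitted; the link-up calls are collapsed into one loop):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target ends into the netns
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let traffic hairpin across the bridge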
00:19:48.196 [2024-07-15 02:21:47.592395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:48.196 [2024-07-15 02:21:47.732887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:48.454 [2024-07-15 02:21:47.818537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:19:48.454 [2024-07-15 02:21:47.818707] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:48.454 [2024-07-15 02:21:47.818721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:48.454 [2024-07-15 02:21:47.818729] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:48.454 [2024-07-15 02:21:47.819277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:48.454 [2024-07-15 02:21:47.819471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:48.454 [2024-07-15 02:21:47.819666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:48.454 [2024-07-15 02:21:47.819671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:49.018 02:21:48 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:19:49.018 02:21:48 -- common/autotest_common.sh@852 -- # return 0
00:19:49.018 02:21:48 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:49.018 02:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:49.018 02:21:48 -- common/autotest_common.sh@10 -- # set +x
00:19:49.018 [2024-07-15 02:21:48.538809] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:49.018 02:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:49.018 02:21:48 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:19:49.018 02:21:48 -- common/autotest_common.sh@718 -- # xtrace_disable
00:19:49.018 02:21:48 -- common/autotest_common.sh@10 -- # set +x
00:19:49.277 02:21:48 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:19:49.277 02:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:49.277 02:21:48 -- common/autotest_common.sh@10 -- # set +x
00:19:49.277 Malloc0
00:19:49.277 02:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:49.277 02:21:48 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:19:49.277 02:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:49.277 02:21:48 -- common/autotest_common.sh@10 -- # set +x
00:19:49.277 02:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:49.277 02:21:48 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:19:49.277 02:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:49.277 02:21:48 -- common/autotest_common.sh@10 -- # set +x
00:19:49.277 02:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:49.277 02:21:48 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:49.277 02:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:49.277 02:21:48 -- common/autotest_common.sh@10 -- # set +x
00:19:49.277 [2024-07-15 02:21:48.646081] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:49.277 02:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:49.277 02:21:48 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:19:49.277 02:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:49.277 02:21:48 -- common/autotest_common.sh@10 -- # set +x
00:19:49.277 02:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:49.277 02:21:48 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:19:49.277 02:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:49.277 02:21:48 -- common/autotest_common.sh@10 -- # set +x
00:19:49.277 [2024-07-15 02:21:48.661855] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:19:49.277 [
00:19:49.277 {
00:19:49.277 "allow_any_host": true,
00:19:49.277 "hosts": [],
00:19:49.277 "listen_addresses": [
00:19:49.277 {
00:19:49.277 "adrfam": "IPv4",
00:19:49.277 "traddr": "10.0.0.2",
00:19:49.277 "transport": "TCP",
00:19:49.277 "trsvcid": "4420",
00:19:49.277 "trtype": "TCP"
00:19:49.277 }
00:19:49.277 ],
00:19:49.277 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:19:49.277 "subtype": "Discovery"
00:19:49.277 },
00:19:49.277 {
00:19:49.277 "allow_any_host": true,
00:19:49.277 "hosts": [],
00:19:49.277 "listen_addresses": [
00:19:49.277 {
00:19:49.277 "adrfam": "IPv4",
00:19:49.277 "traddr": "10.0.0.2",
00:19:49.277 "transport": "TCP",
00:19:49.277 "trsvcid": "4420",
00:19:49.277 "trtype": "TCP"
00:19:49.277 }
00:19:49.277 ],
00:19:49.277 "max_cntlid": 65519,
00:19:49.277 "max_namespaces": 32,
00:19:49.277 "min_cntlid": 1,
00:19:49.277 "model_number": "SPDK bdev Controller",
00:19:49.277 "namespaces": [
00:19:49.277 {
00:19:49.277 "bdev_name": "Malloc0",
00:19:49.277 "eui64": "ABCDEF0123456789",
00:19:49.277 "name": "Malloc0",
00:19:49.277 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:19:49.277 "nsid": 1,
00:19:49.277 "uuid": "2b80fb6b-19be-4a31-82b7-c04f8ba27190"
00:19:49.277 }
00:19:49.277 ],
00:19:49.277 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:19:49.277 "serial_number": "SPDK00000000000001",
00:19:49.277 "subtype": "NVMe"
00:19:49.277 }
00:19:49.277 ]
00:19:49.277 02:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:49.277 02:21:48 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:19:49.277 [2024-07-15 02:21:48.694035] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
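The subsystem dump above is the net effect of the rpc_cmd calls traced since the target came up; in this harness rpc_cmd forwards its arguments to SPDK's scripts/rpc.py, which talks to the target over the /var/tmp/spdk.sock Unix socket (reachable from the root namespace even though nvmf_tgt runs inside nvmf_tgt_ns_spdk). A hedged sketch of the same configuration issued directly, ending with the identify probe the test launches next; paths assume this run's /home/vagrant/spdk_repo layout:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192        # transport flags copied verbatim from the trace; -u is the in-capsule data size
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # interrogate the discovery subsystem the same way host/identify.sh@39 does:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all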
00:19:49.277 [2024-07-15 02:21:48.694084] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92603 ] 00:19:49.277 [2024-07-15 02:21:48.833506] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:49.277 [2024-07-15 02:21:48.833578] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:49.277 [2024-07-15 02:21:48.833585] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:49.277 [2024-07-15 02:21:48.833648] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:49.277 [2024-07-15 02:21:48.833662] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:49.277 [2024-07-15 02:21:48.833830] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:49.277 [2024-07-15 02:21:48.833887] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b3b6c0 0 00:19:49.540 [2024-07-15 02:21:48.842694] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:49.540 [2024-07-15 02:21:48.842719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:49.540 [2024-07-15 02:21:48.842725] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:49.540 [2024-07-15 02:21:48.842729] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:49.540 [2024-07-15 02:21:48.842778] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.842785] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.842790] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.540 [2024-07-15 02:21:48.842816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:49.540 [2024-07-15 02:21:48.842857] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.540 [2024-07-15 02:21:48.850664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.540 [2024-07-15 02:21:48.850687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.540 [2024-07-15 02:21:48.850692] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.850697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b71f60) on tqpair=0x1b3b6c0 00:19:49.540 [2024-07-15 02:21:48.850712] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:49.540 [2024-07-15 02:21:48.850720] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:49.540 [2024-07-15 02:21:48.850726] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:49.540 [2024-07-15 02:21:48.850743] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.850748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.540 [2024-07-15 
02:21:48.850752] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.540 [2024-07-15 02:21:48.850761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.540 [2024-07-15 02:21:48.850790] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.540 [2024-07-15 02:21:48.850899] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.540 [2024-07-15 02:21:48.850906] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.540 [2024-07-15 02:21:48.850910] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.850914] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b71f60) on tqpair=0x1b3b6c0 00:19:49.540 [2024-07-15 02:21:48.850921] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:49.540 [2024-07-15 02:21:48.850929] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:49.540 [2024-07-15 02:21:48.850937] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.850942] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.850945] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.540 [2024-07-15 02:21:48.850954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.540 [2024-07-15 02:21:48.850974] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.540 [2024-07-15 02:21:48.851033] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.540 [2024-07-15 02:21:48.851040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.540 [2024-07-15 02:21:48.851044] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851048] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b71f60) on tqpair=0x1b3b6c0 00:19:49.540 [2024-07-15 02:21:48.851055] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:49.540 [2024-07-15 02:21:48.851064] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:49.540 [2024-07-15 02:21:48.851072] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.540 [2024-07-15 02:21:48.851088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.540 [2024-07-15 02:21:48.851106] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.540 [2024-07-15 02:21:48.851162] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.540 [2024-07-15 02:21:48.851169] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.540 [2024-07-15 02:21:48.851173] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851177] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b71f60) on tqpair=0x1b3b6c0 00:19:49.540 [2024-07-15 02:21:48.851184] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:49.540 [2024-07-15 02:21:48.851195] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851199] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851203] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.540 [2024-07-15 02:21:48.851210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.540 [2024-07-15 02:21:48.851229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.540 [2024-07-15 02:21:48.851288] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.540 [2024-07-15 02:21:48.851295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.540 [2024-07-15 02:21:48.851298] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851302] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b71f60) on tqpair=0x1b3b6c0 00:19:49.540 [2024-07-15 02:21:48.851308] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:49.540 [2024-07-15 02:21:48.851314] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:49.540 [2024-07-15 02:21:48.851322] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:49.540 [2024-07-15 02:21:48.851428] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:49.540 [2024-07-15 02:21:48.851433] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:49.540 [2024-07-15 02:21:48.851444] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851448] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.540 [2024-07-15 02:21:48.851460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.540 [2024-07-15 02:21:48.851479] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.540 [2024-07-15 02:21:48.851541] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.540 [2024-07-15 02:21:48.851549] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.540 [2024-07-15 02:21:48.851552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:19:49.540 [2024-07-15 02:21:48.851557] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b71f60) on tqpair=0x1b3b6c0 00:19:49.540 [2024-07-15 02:21:48.851563] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:49.540 [2024-07-15 02:21:48.851574] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851578] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.540 [2024-07-15 02:21:48.851589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.540 [2024-07-15 02:21:48.851608] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.540 [2024-07-15 02:21:48.851675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.540 [2024-07-15 02:21:48.851684] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.540 [2024-07-15 02:21:48.851688] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851692] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b71f60) on tqpair=0x1b3b6c0 00:19:49.540 [2024-07-15 02:21:48.851699] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:49.540 [2024-07-15 02:21:48.851704] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:49.540 [2024-07-15 02:21:48.851713] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:49.540 [2024-07-15 02:21:48.851729] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:49.540 [2024-07-15 02:21:48.851739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.540 [2024-07-15 02:21:48.851743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.851747] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.851755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.541 [2024-07-15 02:21:48.851777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.541 [2024-07-15 02:21:48.851872] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.541 [2024-07-15 02:21:48.851880] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.541 [2024-07-15 02:21:48.851884] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.851888] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b3b6c0): datao=0, datal=4096, cccid=0 00:19:49.541 [2024-07-15 02:21:48.851893] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b71f60) on tqpair(0x1b3b6c0): expected_datao=0, 
payload_size=4096 00:19:49.541 [2024-07-15 02:21:48.851903] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.851908] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.851916] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.541 [2024-07-15 02:21:48.851923] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.541 [2024-07-15 02:21:48.851926] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.851930] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b71f60) on tqpair=0x1b3b6c0 00:19:49.541 [2024-07-15 02:21:48.851940] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:49.541 [2024-07-15 02:21:48.851946] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:49.541 [2024-07-15 02:21:48.851951] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:49.541 [2024-07-15 02:21:48.851957] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:49.541 [2024-07-15 02:21:48.851962] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:49.541 [2024-07-15 02:21:48.851968] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:49.541 [2024-07-15 02:21:48.851982] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:49.541 [2024-07-15 02:21:48.851991] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.851995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.851999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.852007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.541 [2024-07-15 02:21:48.852029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.541 [2024-07-15 02:21:48.852095] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.541 [2024-07-15 02:21:48.852102] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.541 [2024-07-15 02:21:48.852105] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852110] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b71f60) on tqpair=0x1b3b6c0 00:19:49.541 [2024-07-15 02:21:48.852119] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.852133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.541 [2024-07-15 
02:21:48.852140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852144] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852147] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.852153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.541 [2024-07-15 02:21:48.852160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852167] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.852173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.541 [2024-07-15 02:21:48.852180] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852183] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852187] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.852193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.541 [2024-07-15 02:21:48.852198] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:49.541 [2024-07-15 02:21:48.852212] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:49.541 [2024-07-15 02:21:48.852219] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852227] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.852234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.541 [2024-07-15 02:21:48.852256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b71f60, cid 0, qid 0 00:19:49.541 [2024-07-15 02:21:48.852263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b720c0, cid 1, qid 0 00:19:49.541 [2024-07-15 02:21:48.852268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72220, cid 2, qid 0 00:19:49.541 [2024-07-15 02:21:48.852273] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.541 [2024-07-15 02:21:48.852278] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b724e0, cid 4, qid 0 00:19:49.541 [2024-07-15 02:21:48.852374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.541 [2024-07-15 02:21:48.852381] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.541 [2024-07-15 02:21:48.852385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852389] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1b724e0) on tqpair=0x1b3b6c0 00:19:49.541 [2024-07-15 02:21:48.852395] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:49.541 [2024-07-15 02:21:48.852401] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:49.541 [2024-07-15 02:21:48.852412] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.852428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.541 [2024-07-15 02:21:48.852447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b724e0, cid 4, qid 0 00:19:49.541 [2024-07-15 02:21:48.852511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.541 [2024-07-15 02:21:48.852518] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.541 [2024-07-15 02:21:48.852522] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852526] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b3b6c0): datao=0, datal=4096, cccid=4 00:19:49.541 [2024-07-15 02:21:48.852531] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b724e0) on tqpair(0x1b3b6c0): expected_datao=0, payload_size=4096 00:19:49.541 [2024-07-15 02:21:48.852539] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852543] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.541 [2024-07-15 02:21:48.852558] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.541 [2024-07-15 02:21:48.852562] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852566] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b724e0) on tqpair=0x1b3b6c0 00:19:49.541 [2024-07-15 02:21:48.852580] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:49.541 [2024-07-15 02:21:48.852648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852659] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.852672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.541 [2024-07-15 02:21:48.852680] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852688] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b3b6c0) 00:19:49.541 [2024-07-15 02:21:48.852694] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.541 [2024-07-15 02:21:48.852726] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b724e0, cid 4, qid 0 00:19:49.541 [2024-07-15 02:21:48.852734] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72640, cid 5, qid 0 00:19:49.541 [2024-07-15 02:21:48.852882] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.541 [2024-07-15 02:21:48.852901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.541 [2024-07-15 02:21:48.852906] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852910] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b3b6c0): datao=0, datal=1024, cccid=4 00:19:49.541 [2024-07-15 02:21:48.852915] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b724e0) on tqpair(0x1b3b6c0): expected_datao=0, payload_size=1024 00:19:49.541 [2024-07-15 02:21:48.852923] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852927] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852933] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.541 [2024-07-15 02:21:48.852939] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.541 [2024-07-15 02:21:48.852943] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.852947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72640) on tqpair=0x1b3b6c0 00:19:49.541 [2024-07-15 02:21:48.898676] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.541 [2024-07-15 02:21:48.898699] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.541 [2024-07-15 02:21:48.898721] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.898726] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b724e0) on tqpair=0x1b3b6c0 00:19:49.541 [2024-07-15 02:21:48.898742] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.898747] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.541 [2024-07-15 02:21:48.898750] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b3b6c0) 00:19:49.542 [2024-07-15 02:21:48.898759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.542 [2024-07-15 02:21:48.898794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b724e0, cid 4, qid 0 00:19:49.542 [2024-07-15 02:21:48.898905] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.542 [2024-07-15 02:21:48.898913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.542 [2024-07-15 02:21:48.898918] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.542 [2024-07-15 02:21:48.898922] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b3b6c0): datao=0, datal=3072, cccid=4 00:19:49.542 [2024-07-15 02:21:48.898927] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b724e0) on tqpair(0x1b3b6c0): expected_datao=0, payload_size=3072 00:19:49.542 [2024-07-15 
02:21:48.898935] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:19:49.542 [2024-07-15 02:21:48.898940] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:19:49.542 [2024-07-15 02:21:48.898948] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:49.542 [2024-07-15 02:21:48.898955] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:49.542 [2024-07-15 02:21:48.898959] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:49.542 [2024-07-15 02:21:48.898963] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b724e0) on tqpair=0x1b3b6c0
00:19:49.542 [2024-07-15 02:21:48.898974] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:49.542 [2024-07-15 02:21:48.898979] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:49.542 [2024-07-15 02:21:48.898983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b3b6c0)
00:19:49.542 [2024-07-15 02:21:48.898990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:49.542 [2024-07-15 02:21:48.899017] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b724e0, cid 4, qid 0
00:19:49.542 [2024-07-15 02:21:48.899090] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:19:49.542 [2024-07-15 02:21:48.899097] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:19:49.542 [2024-07-15 02:21:48.899101] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:19:49.542 [2024-07-15 02:21:48.899105] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b3b6c0): datao=0, datal=8, cccid=4
00:19:49.542 [2024-07-15 02:21:48.899110] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b724e0) on tqpair(0x1b3b6c0): expected_datao=0, payload_size=8
00:19:49.542 [2024-07-15 02:21:48.899117] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:19:49.542 [2024-07-15 02:21:48.899130] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:19:49.542 [2024-07-15 02:21:48.940664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:49.542 [2024-07-15 02:21:48.940693] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:49.542 [2024-07-15 02:21:48.940715] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:49.542 [2024-07-15 02:21:48.940720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b724e0) on tqpair=0x1b3b6c0
00:19:49.542 =====================================================
00:19:49.542 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:19:49.542 =====================================================
00:19:49.542 Controller Capabilities/Features
00:19:49.542 ================================
00:19:49.542 Vendor ID: 0000
00:19:49.542 Subsystem Vendor ID: 0000
00:19:49.542 Serial Number: ....................
00:19:49.542 Model Number: ........................................
00:19:49.542 Firmware Version: 24.01.1
00:19:49.542 Recommended Arb Burst: 0
00:19:49.542 IEEE OUI Identifier: 00 00 00
00:19:49.542 Multi-path I/O
00:19:49.542 May have multiple subsystem ports: No
00:19:49.542 May have multiple controllers: No
00:19:49.542 Associated with SR-IOV VF: No
00:19:49.542 Max Data Transfer Size: 131072
00:19:49.542 Max Number of Namespaces: 0
00:19:49.542 Max Number of I/O Queues: 1024
00:19:49.542 NVMe Specification Version (VS): 1.3
00:19:49.542 NVMe Specification Version (Identify): 1.3
00:19:49.542 Maximum Queue Entries: 128
00:19:49.542 Contiguous Queues Required: Yes
00:19:49.542 Arbitration Mechanisms Supported
00:19:49.542 Weighted Round Robin: Not Supported
00:19:49.542 Vendor Specific: Not Supported
00:19:49.542 Reset Timeout: 15000 ms
00:19:49.542 Doorbell Stride: 4 bytes
00:19:49.542 NVM Subsystem Reset: Not Supported
00:19:49.542 Command Sets Supported
00:19:49.542 NVM Command Set: Supported
00:19:49.542 Boot Partition: Not Supported
00:19:49.542 Memory Page Size Minimum: 4096 bytes
00:19:49.542 Memory Page Size Maximum: 4096 bytes
00:19:49.542 Persistent Memory Region: Not Supported
00:19:49.542 Optional Asynchronous Events Supported
00:19:49.542 Namespace Attribute Notices: Not Supported
00:19:49.542 Firmware Activation Notices: Not Supported
00:19:49.542 ANA Change Notices: Not Supported
00:19:49.542 PLE Aggregate Log Change Notices: Not Supported
00:19:49.542 LBA Status Info Alert Notices: Not Supported
00:19:49.542 EGE Aggregate Log Change Notices: Not Supported
00:19:49.542 Normal NVM Subsystem Shutdown event: Not Supported
00:19:49.542 Zone Descriptor Change Notices: Not Supported
00:19:49.542 Discovery Log Change Notices: Supported
00:19:49.542 Controller Attributes
00:19:49.542 128-bit Host Identifier: Not Supported
00:19:49.542 Non-Operational Permissive Mode: Not Supported
00:19:49.542 NVM Sets: Not Supported
00:19:49.542 Read Recovery Levels: Not Supported
00:19:49.542 Endurance Groups: Not Supported
00:19:49.542 Predictable Latency Mode: Not Supported
00:19:49.542 Traffic Based Keep ALive: Not Supported
00:19:49.542 Namespace Granularity: Not Supported
00:19:49.542 SQ Associations: Not Supported
00:19:49.542 UUID List: Not Supported
00:19:49.542 Multi-Domain Subsystem: Not Supported
00:19:49.542 Fixed Capacity Management: Not Supported
00:19:49.542 Variable Capacity Management: Not Supported
00:19:49.542 Delete Endurance Group: Not Supported
00:19:49.542 Delete NVM Set: Not Supported
00:19:49.542 Extended LBA Formats Supported: Not Supported
00:19:49.542 Flexible Data Placement Supported: Not Supported
00:19:49.542
00:19:49.542 Controller Memory Buffer Support
00:19:49.542 ================================
00:19:49.542 Supported: No
00:19:49.542
00:19:49.542 Persistent Memory Region Support
00:19:49.542 ================================
00:19:49.542 Supported: No
00:19:49.542
00:19:49.542 Admin Command Set Attributes
00:19:49.542 ============================
00:19:49.542 Security Send/Receive: Not Supported
00:19:49.542 Format NVM: Not Supported
00:19:49.542 Firmware Activate/Download: Not Supported
00:19:49.542 Namespace Management: Not Supported
00:19:49.542 Device Self-Test: Not Supported
00:19:49.542 Directives: Not Supported
00:19:49.542 NVMe-MI: Not Supported
00:19:49.542 Virtualization Management: Not Supported
00:19:49.542 Doorbell Buffer Config: Not Supported
00:19:49.542 Get LBA Status Capability: Not Supported
00:19:49.542 Command & Feature Lockdown Capability: Not Supported
00:19:49.542 Abort Command Limit: 1
00:19:49.542 Async Event Request Limit: 4
00:19:49.542 Number of Firmware Slots: N/A
00:19:49.542 Firmware Slot 1 Read-Only: N/A
00:19:49.542 Firmware Activation Without Reset: N/A
00:19:49.542 Multiple Update Detection Support: N/A
00:19:49.542 Firmware Update Granularity: No Information Provided
00:19:49.542 Per-Namespace SMART Log: No
00:19:49.542 Asymmetric Namespace Access Log Page: Not Supported
00:19:49.542 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:19:49.542 Command Effects Log Page: Not Supported
00:19:49.542 Get Log Page Extended Data: Supported
00:19:49.542 Telemetry Log Pages: Not Supported
00:19:49.542 Persistent Event Log Pages: Not Supported
00:19:49.542 Supported Log Pages Log Page: May Support
00:19:49.542 Commands Supported & Effects Log Page: Not Supported
00:19:49.542 Feature Identifiers & Effects Log Page:May Support
00:19:49.542 NVMe-MI Commands & Effects Log Page: May Support
00:19:49.542 Data Area 4 for Telemetry Log: Not Supported
00:19:49.542 Error Log Page Entries Supported: 128
00:19:49.542 Keep Alive: Not Supported
00:19:49.542
00:19:49.542 NVM Command Set Attributes
00:19:49.542 ==========================
00:19:49.542 Submission Queue Entry Size
00:19:49.542 Max: 1
00:19:49.542 Min: 1
00:19:49.542 Completion Queue Entry Size
00:19:49.542 Max: 1
00:19:49.542 Min: 1
00:19:49.542 Number of Namespaces: 0
00:19:49.542 Compare Command: Not Supported
00:19:49.542 Write Uncorrectable Command: Not Supported
00:19:49.542 Dataset Management Command: Not Supported
00:19:49.542 Write Zeroes Command: Not Supported
00:19:49.542 Set Features Save Field: Not Supported
00:19:49.542 Reservations: Not Supported
00:19:49.542 Timestamp: Not Supported
00:19:49.542 Copy: Not Supported
00:19:49.542 Volatile Write Cache: Not Present
00:19:49.542 Atomic Write Unit (Normal): 1
00:19:49.542 Atomic Write Unit (PFail): 1
00:19:49.542 Atomic Compare & Write Unit: 1
00:19:49.542 Fused Compare & Write: Supported
00:19:49.542 Scatter-Gather List
00:19:49.542 SGL Command Set: Supported
00:19:49.542 SGL Keyed: Supported
00:19:49.542 SGL Bit Bucket Descriptor: Not Supported
00:19:49.542 SGL Metadata Pointer: Not Supported
00:19:49.542 Oversized SGL: Not Supported
00:19:49.542 SGL Metadata Address: Not Supported
00:19:49.542 SGL Offset: Supported
00:19:49.542 Transport SGL Data Block: Not Supported
00:19:49.542 Replay Protected Memory Block: Not Supported
00:19:49.542
00:19:49.542 Firmware Slot Information
00:19:49.542 =========================
00:19:49.542 Active slot: 0
00:19:49.542
00:19:49.542
00:19:49.542 Error Log
00:19:49.542 =========
00:19:49.542
00:19:49.542 Active Namespaces
00:19:49.542 =================
00:19:49.542 Discovery Log Page
00:19:49.542 ==================
00:19:49.542 Generation Counter: 2
00:19:49.542 Number of Records: 2
00:19:49.542 Record Format: 0
00:19:49.542
00:19:49.542 Discovery Log Entry 0
00:19:49.542 ----------------------
00:19:49.542 Transport Type: 3 (TCP)
00:19:49.542 Address Family: 1 (IPv4)
00:19:49.542 Subsystem Type: 3 (Current Discovery Subsystem)
00:19:49.543 Entry Flags:
00:19:49.543 Duplicate Returned Information: 1
00:19:49.543 Explicit Persistent Connection Support for Discovery: 1
00:19:49.543 Transport Requirements:
00:19:49.543 Secure Channel: Not Required
00:19:49.543 Port ID: 0 (0x0000)
00:19:49.543 Controller ID: 65535 (0xffff)
00:19:49.543 Admin Max SQ Size: 128
00:19:49.543 Transport Service Identifier: 4420
00:19:49.543 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:19:49.543 Transport Address: 10.0.0.2
00:19:49.543 Discovery Log Entry 1
00:19:49.543 ----------------------
00:19:49.543 Transport Type: 3 (TCP)
00:19:49.543 Address Family: 1 (IPv4)
00:19:49.543 Subsystem Type: 2 (NVM Subsystem)
00:19:49.543 Entry Flags:
00:19:49.543 Duplicate Returned Information: 0
00:19:49.543 Explicit Persistent Connection Support for Discovery: 0
00:19:49.543 Transport Requirements:
00:19:49.543 Secure Channel: Not Required
00:19:49.543 Port ID: 0 (0x0000)
00:19:49.543 Controller ID: 65535 (0xffff)
00:19:49.543 Admin Max SQ Size: 128
00:19:49.543 Transport Service Identifier: 4420
00:19:49.543 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:19:49.543 Transport Address: 10.0.0.2 [2024-07-15 02:21:48.940905] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:19:49.543 [2024-07-15 02:21:48.940929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:49.543 [2024-07-15 02:21:48.940937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:49.543 [2024-07-15 02:21:48.940944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:49.543 [2024-07-15 02:21:48.940950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:49.543 [2024-07-15 02:21:48.940963] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:49.543 [2024-07-15 02:21:48.940968] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:49.543 [2024-07-15 02:21:48.940972] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0)
00:19:49.543 [2024-07-15 02:21:48.940981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:49.543 [2024-07-15 02:21:48.941012] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0
00:19:49.543 [2024-07-15 02:21:48.941083] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:49.543 [2024-07-15 02:21:48.941091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:49.543 [2024-07-15 02:21:48.941095] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:49.543 [2024-07-15 02:21:48.941099] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0
00:19:49.543 [2024-07-15 02:21:48.941109] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:49.543 [2024-07-15 02:21:48.941113] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:49.543 [2024-07-15 02:21:48.941117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0)
00:19:49.543 [2024-07-15 02:21:48.941125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:49.543 [2024-07-15 02:21:48.941150] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0
00:19:49.543 [2024-07-15 02:21:48.941224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:49.543 [2024-07-15 02:21:48.941241] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:49.543 [2024-07-15 02:21:48.941246]
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941250] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.543 [2024-07-15 02:21:48.941262] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:49.543 [2024-07-15 02:21:48.941267] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:49.543 [2024-07-15 02:21:48.941279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.543 [2024-07-15 02:21:48.941296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 02:21:48.941317] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.543 [2024-07-15 02:21:48.941377] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.543 [2024-07-15 02:21:48.941386] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.543 [2024-07-15 02:21:48.941390] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941394] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.543 [2024-07-15 02:21:48.941407] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.543 [2024-07-15 02:21:48.941423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 02:21:48.941441] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.543 [2024-07-15 02:21:48.941497] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.543 [2024-07-15 02:21:48.941512] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.543 [2024-07-15 02:21:48.941517] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941521] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.543 [2024-07-15 02:21:48.941534] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941542] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.543 [2024-07-15 02:21:48.941550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 02:21:48.941570] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.543 [2024-07-15 02:21:48.941651] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.543 [2024-07-15 
02:21:48.941664] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.543 [2024-07-15 02:21:48.941668] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.543 [2024-07-15 02:21:48.941685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941694] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.543 [2024-07-15 02:21:48.941701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 02:21:48.941723] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.543 [2024-07-15 02:21:48.941779] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.543 [2024-07-15 02:21:48.941786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.543 [2024-07-15 02:21:48.941789] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941793] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.543 [2024-07-15 02:21:48.941805] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941819] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.543 [2024-07-15 02:21:48.941826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 02:21:48.941844] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.543 [2024-07-15 02:21:48.941902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.543 [2024-07-15 02:21:48.941909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.543 [2024-07-15 02:21:48.941912] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941917] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.543 [2024-07-15 02:21:48.941928] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.941936] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.543 [2024-07-15 02:21:48.941943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 02:21:48.941961] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.543 [2024-07-15 02:21:48.942015] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.543 [2024-07-15 02:21:48.942027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.543 [2024-07-15 02:21:48.942031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
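This long run of near-identical DEBUG groups is the discovery controller being shut down gracefully: nvme_ctrlr_shutdown_set_cc_done above reported RTD3E = 0 and a 10000 ms shutdown timeout, and each FABRIC PROPERTY GET block on either side of this point is one poll of the CSTS register while the host waits for the target to finish (the poll succeeds a few lines further down with "shutdown complete in 5 milliseconds"). In spec terms the host writes 01b to CC.SHN and polls CSTS.SHST for 10b; on a fabrics controller both registers live behind Property Set/Get commands rather than MMIO. A rough sketch, where prop_get() and prop_set() are assumed stand-ins for those fabrics property commands:

    #include <stdint.h>

    #define NVME_REG_CC     0x14          /* Controller Configuration */
    #define NVME_REG_CSTS   0x1c          /* Controller Status */
    #define CC_SHN_MASK     (3u << 14)
    #define CC_SHN_NORMAL   (1u << 14)    /* CC.SHN = 01b: normal shutdown */
    #define CSTS_SHST_MASK  (3u << 2)
    #define CSTS_SHST_DONE  (2u << 2)     /* CSTS.SHST = 10b: shutdown complete */

    extern uint32_t prop_get(uint32_t offset);              /* assumed helper */
    extern void prop_set(uint32_t offset, uint32_t value);  /* assumed helper */

    static void shutdown_gracefully(void)
    {
        prop_set(NVME_REG_CC,
                 (prop_get(NVME_REG_CC) & ~CC_SHN_MASK) | CC_SHN_NORMAL);
        /* Each iteration corresponds to one FABRIC PROPERTY GET group in the log. */
        while ((prop_get(NVME_REG_CSTS) & CSTS_SHST_MASK) != CSTS_SHST_DONE) {
            /* a real implementation bounds this loop by the 10000 ms timeout */
        }
    }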
00:19:49.543 [2024-07-15 02:21:48.942035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.543 [2024-07-15 02:21:48.942048] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.942052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.942056] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.543 [2024-07-15 02:21:48.942064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 02:21:48.942082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.543 [2024-07-15 02:21:48.942136] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.543 [2024-07-15 02:21:48.942143] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.543 [2024-07-15 02:21:48.942147] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.942151] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.543 [2024-07-15 02:21:48.942162] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.942167] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.543 [2024-07-15 02:21:48.942171] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.543 [2024-07-15 02:21:48.942178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 02:21:48.942197] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.543 [2024-07-15 02:21:48.942252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.544 [2024-07-15 02:21:48.942259] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.544 [2024-07-15 02:21:48.942263] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.942267] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.544 [2024-07-15 02:21:48.942278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.942283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.942287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.544 [2024-07-15 02:21:48.942294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 02:21:48.942312] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.544 [2024-07-15 02:21:48.942363] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.544 [2024-07-15 02:21:48.942374] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.544 [2024-07-15 02:21:48.942379] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.942383] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.544 [2024-07-15 02:21:48.942395] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.942400] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.942404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.544 [2024-07-15 02:21:48.942411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 02:21:48.942430] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.544 [2024-07-15 02:21:48.942487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.544 [2024-07-15 02:21:48.942498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.544 [2024-07-15 02:21:48.942502] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.942507] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.544 [2024-07-15 02:21:48.942518] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.942523] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.942527] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.544 [2024-07-15 02:21:48.942534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 02:21:48.942553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.544 [2024-07-15 02:21:48.946619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.544 [2024-07-15 02:21:48.946643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.544 [2024-07-15 02:21:48.946649] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.946653] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.544 [2024-07-15 02:21:48.946670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.946675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.946679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b3b6c0) 00:19:49.544 [2024-07-15 02:21:48.946688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 02:21:48.946717] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b72380, cid 3, qid 0 00:19:49.544 [2024-07-15 02:21:48.946776] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.544 [2024-07-15 02:21:48.946784] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.544 [2024-07-15 02:21:48.946787] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.544 [2024-07-15 02:21:48.946792] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b72380) on tqpair=0x1b3b6c0 00:19:49.544 [2024-07-15 02:21:48.946801] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:19:49.544 00:19:49.544 02:21:48 -- host/identify.sh@45 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:49.544 [2024-07-15 02:21:48.981950] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:49.544 [2024-07-15 02:21:48.982000] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92610 ] 00:19:49.808 [2024-07-15 02:21:49.120731] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:49.808 [2024-07-15 02:21:49.120815] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:49.808 [2024-07-15 02:21:49.120823] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:49.808 [2024-07-15 02:21:49.120835] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:49.808 [2024-07-15 02:21:49.120845] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:49.808 [2024-07-15 02:21:49.120985] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:49.808 [2024-07-15 02:21:49.121039] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19016c0 0 00:19:49.808 [2024-07-15 02:21:49.127631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:49.808 [2024-07-15 02:21:49.127672] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:49.808 [2024-07-15 02:21:49.127694] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:49.808 [2024-07-15 02:21:49.127698] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:49.808 [2024-07-15 02:21:49.127748] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.127755] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.127759] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.808 [2024-07-15 02:21:49.127773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:49.808 [2024-07-15 02:21:49.127806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, cid 0, qid 0 00:19:49.808 [2024-07-15 02:21:49.138651] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.808 [2024-07-15 02:21:49.138676] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.808 [2024-07-15 02:21:49.138682] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.138687] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1937f60) on tqpair=0x19016c0 00:19:49.808 [2024-07-15 02:21:49.138702] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:49.808 [2024-07-15 02:21:49.138711] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:49.808 [2024-07-15 02:21:49.138718] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 
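This spdk_nvme_identify run targets the I/O subsystem nqn.2016-06.io.spdk:cnode1 directly (the earlier dump came from the discovery subsystem), and the DEBUG lines trace the TCP socket setup, the ICReq/ICResp exchange, and the Fabrics CONNECT on the admin queue, after which the host starts reading the VS and CAP properties. A host program can reproduce the same connection with SPDK's public API, roughly like this (error handling mostly elided; the transport ID string is the same one passed to -r above):

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) != 0)
            return 1;

        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Runs the connect/enable state machine whose states the log prints. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL)
            return 1;

        spdk_nvme_detach(ctrlr);
        return 0;
    }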
00:19:49.808 [2024-07-15 02:21:49.138734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.138740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.138744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.808 [2024-07-15 02:21:49.138754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 02:21:49.138786] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, cid 0, qid 0 00:19:49.808 [2024-07-15 02:21:49.138857] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.808 [2024-07-15 02:21:49.138864] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.808 [2024-07-15 02:21:49.138867] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.138872] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1937f60) on tqpair=0x19016c0 00:19:49.808 [2024-07-15 02:21:49.138878] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:49.808 [2024-07-15 02:21:49.138886] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:49.808 [2024-07-15 02:21:49.138894] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.138898] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.138902] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.808 [2024-07-15 02:21:49.138910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 02:21:49.138934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, cid 0, qid 0 00:19:49.808 [2024-07-15 02:21:49.138990] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.808 [2024-07-15 02:21:49.138996] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.808 [2024-07-15 02:21:49.139000] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139004] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1937f60) on tqpair=0x19016c0 00:19:49.808 [2024-07-15 02:21:49.139011] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:49.808 [2024-07-15 02:21:49.139020] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:49.808 [2024-07-15 02:21:49.139028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139032] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.808 [2024-07-15 02:21:49.139043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 02:21:49.139061] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, 
cid 0, qid 0 00:19:49.808 [2024-07-15 02:21:49.139115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.808 [2024-07-15 02:21:49.139122] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.808 [2024-07-15 02:21:49.139126] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139130] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1937f60) on tqpair=0x19016c0 00:19:49.808 [2024-07-15 02:21:49.139137] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:49.808 [2024-07-15 02:21:49.139147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139151] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139155] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.808 [2024-07-15 02:21:49.139162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 02:21:49.139180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, cid 0, qid 0 00:19:49.808 [2024-07-15 02:21:49.139238] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.808 [2024-07-15 02:21:49.139245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.808 [2024-07-15 02:21:49.139248] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139252] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1937f60) on tqpair=0x19016c0 00:19:49.808 [2024-07-15 02:21:49.139258] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:49.808 [2024-07-15 02:21:49.139264] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:49.808 [2024-07-15 02:21:49.139272] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:49.808 [2024-07-15 02:21:49.139377] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:49.808 [2024-07-15 02:21:49.139382] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:49.808 [2024-07-15 02:21:49.139391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139396] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.808 [2024-07-15 02:21:49.139407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 02:21:49.139425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, cid 0, qid 0 00:19:49.808 [2024-07-15 02:21:49.139485] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.808 [2024-07-15 02:21:49.139492] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.808 
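The sequence here is the generic controller-enable handshake every NVMe host performs, just carried over fabrics property commands: read CC and CSTS, observe CC.EN = 0 with CSTS.RDY = 0 (controller disabled), write CC.EN = 1, then poll until CSTS.RDY = 1, which happens just below. Stripped to its core, and reusing the assumed prop_get()/prop_set() helpers from the earlier shutdown sketch:

    #define CC_EN    (1u << 0)   /* CC.EN, bit 0 of CC (offset 0x14) */
    #define CSTS_RDY (1u << 0)   /* CSTS.RDY, bit 0 of CSTS (offset 0x1c) */

    static void enable_ctrlr(void)
    {
        /* "CC.EN = 0 && CSTS.RDY = 0 - controller is disabled" */
        while (prop_get(NVME_REG_CSTS) & CSTS_RDY) {
            /* drain a previously enabled controller first */
        }

        /* "Setting CC.EN = 1", issued as a FABRIC PROPERTY SET */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | CC_EN);

        /* "wait for CSTS.RDY = 1 (timeout 15000 ms)"; the timeout derives
         * from CAP.TO, and each poll is a FABRIC PROPERTY GET below. */
        while (!(prop_get(NVME_REG_CSTS) & CSTS_RDY)) {
        }
    }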
[2024-07-15 02:21:49.139496] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139500] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1937f60) on tqpair=0x19016c0 00:19:49.808 [2024-07-15 02:21:49.139506] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:49.808 [2024-07-15 02:21:49.139517] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.808 [2024-07-15 02:21:49.139525] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.808 [2024-07-15 02:21:49.139532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 02:21:49.139550] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, cid 0, qid 0 00:19:49.808 [2024-07-15 02:21:49.139621] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.809 [2024-07-15 02:21:49.139631] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.809 [2024-07-15 02:21:49.139634] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.139639] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1937f60) on tqpair=0x19016c0 00:19:49.809 [2024-07-15 02:21:49.139644] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:49.809 [2024-07-15 02:21:49.139650] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.139659] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:49.809 [2024-07-15 02:21:49.139675] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.139685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.139689] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.139693] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.809 [2024-07-15 02:21:49.139701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.809 [2024-07-15 02:21:49.139724] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, cid 0, qid 0 00:19:49.809 [2024-07-15 02:21:49.139828] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.809 [2024-07-15 02:21:49.139835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.809 [2024-07-15 02:21:49.139839] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.139843] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19016c0): datao=0, datal=4096, cccid=0 00:19:49.809 [2024-07-15 02:21:49.139848] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1937f60) on 
tqpair(0x19016c0): expected_datao=0, payload_size=4096 00:19:49.809 [2024-07-15 02:21:49.139857] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.139862] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.139871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.809 [2024-07-15 02:21:49.139877] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.809 [2024-07-15 02:21:49.139880] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.139884] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1937f60) on tqpair=0x19016c0 00:19:49.809 [2024-07-15 02:21:49.139894] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:49.809 [2024-07-15 02:21:49.139900] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:49.809 [2024-07-15 02:21:49.139905] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:49.809 [2024-07-15 02:21:49.139910] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:49.809 [2024-07-15 02:21:49.139915] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:49.809 [2024-07-15 02:21:49.139920] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.139934] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.139942] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.139947] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.139951] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.809 [2024-07-15 02:21:49.139958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.809 [2024-07-15 02:21:49.139979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, cid 0, qid 0 00:19:49.809 [2024-07-15 02:21:49.140044] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.809 [2024-07-15 02:21:49.140051] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.809 [2024-07-15 02:21:49.140054] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1937f60) on tqpair=0x19016c0 00:19:49.809 [2024-07-15 02:21:49.140068] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140072] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140075] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19016c0) 00:19:49.809 [2024-07-15 02:21:49.140082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.809 [2024-07-15 02:21:49.140089] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140092] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140096] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19016c0) 00:19:49.809 [2024-07-15 02:21:49.140102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.809 [2024-07-15 02:21:49.140108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19016c0) 00:19:49.809 [2024-07-15 02:21:49.140122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.809 [2024-07-15 02:21:49.140128] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140132] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140136] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.809 [2024-07-15 02:21:49.140141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.809 [2024-07-15 02:21:49.140147] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.140159] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.140167] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140171] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140174] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19016c0) 00:19:49.809 [2024-07-15 02:21:49.140182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.809 [2024-07-15 02:21:49.140202] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1937f60, cid 0, qid 0 00:19:49.809 [2024-07-15 02:21:49.140209] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19380c0, cid 1, qid 0 00:19:49.809 [2024-07-15 02:21:49.140214] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938220, cid 2, qid 0 00:19:49.809 [2024-07-15 02:21:49.140219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.809 [2024-07-15 02:21:49.140224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19384e0, cid 4, qid 0 00:19:49.809 [2024-07-15 02:21:49.140325] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.809 [2024-07-15 02:21:49.140332] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.809 [2024-07-15 02:21:49.140336] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140340] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19384e0) on 
tqpair=0x19016c0 00:19:49.809 [2024-07-15 02:21:49.140346] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:49.809 [2024-07-15 02:21:49.140352] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.140361] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.140372] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.140379] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140387] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19016c0) 00:19:49.809 [2024-07-15 02:21:49.140395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.809 [2024-07-15 02:21:49.140413] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19384e0, cid 4, qid 0 00:19:49.809 [2024-07-15 02:21:49.140479] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.809 [2024-07-15 02:21:49.140486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.809 [2024-07-15 02:21:49.140490] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140494] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19384e0) on tqpair=0x19016c0 00:19:49.809 [2024-07-15 02:21:49.140557] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.140578] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.140588] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19016c0) 00:19:49.809 [2024-07-15 02:21:49.140616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.809 [2024-07-15 02:21:49.140640] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19384e0, cid 4, qid 0 00:19:49.809 [2024-07-15 02:21:49.140712] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.809 [2024-07-15 02:21:49.140726] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.809 [2024-07-15 02:21:49.140731] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140735] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19016c0): datao=0, datal=4096, cccid=4 00:19:49.809 [2024-07-15 02:21:49.140740] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19384e0) on tqpair(0x19016c0): 
expected_datao=0, payload_size=4096 00:19:49.809 [2024-07-15 02:21:49.140749] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140753] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.809 [2024-07-15 02:21:49.140768] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.809 [2024-07-15 02:21:49.140771] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.809 [2024-07-15 02:21:49.140775] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19384e0) on tqpair=0x19016c0 00:19:49.809 [2024-07-15 02:21:49.140793] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:49.809 [2024-07-15 02:21:49.140804] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.140815] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:49.809 [2024-07-15 02:21:49.140822] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.140827] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.140830] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.140838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.140859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19384e0, cid 4, qid 0 00:19:49.810 [2024-07-15 02:21:49.140939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.810 [2024-07-15 02:21:49.140946] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.810 [2024-07-15 02:21:49.140949] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.140953] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19016c0): datao=0, datal=4096, cccid=4 00:19:49.810 [2024-07-15 02:21:49.140958] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19384e0) on tqpair(0x19016c0): expected_datao=0, payload_size=4096 00:19:49.810 [2024-07-15 02:21:49.140966] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.140970] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.140978] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.810 [2024-07-15 02:21:49.140984] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.810 [2024-07-15 02:21:49.140988] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.140992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19384e0) on tqpair=0x19016c0 00:19:49.810 [2024-07-15 02:21:49.141008] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:49.810 [2024-07-15 02:21:49.141019] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for 
identify namespace id descriptors (timeout 30000 ms) 00:19:49.810 [2024-07-15 02:21:49.141028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141032] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141036] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.141043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.141063] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19384e0, cid 4, qid 0 00:19:49.810 [2024-07-15 02:21:49.141133] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.810 [2024-07-15 02:21:49.141140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.810 [2024-07-15 02:21:49.141143] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141147] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19016c0): datao=0, datal=4096, cccid=4 00:19:49.810 [2024-07-15 02:21:49.141152] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19384e0) on tqpair(0x19016c0): expected_datao=0, payload_size=4096 00:19:49.810 [2024-07-15 02:21:49.141160] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141164] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.810 [2024-07-15 02:21:49.141178] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.810 [2024-07-15 02:21:49.141181] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141185] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19384e0) on tqpair=0x19016c0 00:19:49.810 [2024-07-15 02:21:49.141195] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:49.810 [2024-07-15 02:21:49.141204] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:49.810 [2024-07-15 02:21:49.141215] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:49.810 [2024-07-15 02:21:49.141222] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:49.810 [2024-07-15 02:21:49.141228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:49.810 [2024-07-15 02:21:49.141233] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:49.810 [2024-07-15 02:21:49.141238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:49.810 [2024-07-15 02:21:49.141243] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:49.810 [2024-07-15 02:21:49.141278] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141292] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.141300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.141309] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141316] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.141323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.810 [2024-07-15 02:21:49.141355] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19384e0, cid 4, qid 0 00:19:49.810 [2024-07-15 02:21:49.141362] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938640, cid 5, qid 0 00:19:49.810 [2024-07-15 02:21:49.141445] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.810 [2024-07-15 02:21:49.141452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.810 [2024-07-15 02:21:49.141456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19384e0) on tqpair=0x19016c0 00:19:49.810 [2024-07-15 02:21:49.141468] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.810 [2024-07-15 02:21:49.141474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.810 [2024-07-15 02:21:49.141477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141481] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938640) on tqpair=0x19016c0 00:19:49.810 [2024-07-15 02:21:49.141492] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141496] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141500] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.141507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.141526] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938640, cid 5, qid 0 00:19:49.810 [2024-07-15 02:21:49.141585] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.810 [2024-07-15 02:21:49.141591] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.810 [2024-07-15 02:21:49.141595] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141621] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938640) on tqpair=0x19016c0 00:19:49.810 [2024-07-15 02:21:49.141635] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141640] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
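With the controller ready, identify's -L all pass walks the optional features one GET FEATURES at a time. The Feature Identifier rides in the low byte of cdw10, so cdw10:00000001 above is Arbitration (FID 01h) and 00000002 is Power Management (02h); the Temperature Threshold (04h) and Number of Queues (07h) reads follow just below, and the Keep Alive Timer (0Fh) was already fetched during init. The KEEP ALIVE (18h) command interleaved here is the host servicing the 10000 ms KATO it negotiated, having earlier logged that it sends one every 5000000 us, i.e. at half the timeout. A small sketch of the cdw10 encoding, with FID values from the NVMe spec:

    #include <stdint.h>

    enum nvme_fid {
        FID_ARBITRATION      = 0x01,  /* cdw10:00000001 above */
        FID_POWER_MGMT       = 0x02,  /* cdw10:00000002 above */
        FID_TEMP_THRESHOLD   = 0x04,  /* cdw10:00000004 below */
        FID_NUM_QUEUES       = 0x07,  /* cdw10:00000007 below */
        FID_KEEP_ALIVE_TIMER = 0x0f,  /* cdw10:0000000f during init */
    };

    /* Get Features cdw10: FID in bits 7:0; SEL (bits 10:8) left at zero,
     * meaning "current value", which matches the zeros in the log. */
    static uint32_t get_features_cdw10(enum nvme_fid fid)
    {
        return (uint32_t)fid;
    }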
00:19:49.810 [2024-07-15 02:21:49.141644] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.141651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.141671] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938640, cid 5, qid 0 00:19:49.810 [2024-07-15 02:21:49.141731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.810 [2024-07-15 02:21:49.141738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.810 [2024-07-15 02:21:49.141742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141746] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938640) on tqpair=0x19016c0 00:19:49.810 [2024-07-15 02:21:49.141757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.141772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.141789] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938640, cid 5, qid 0 00:19:49.810 [2024-07-15 02:21:49.141848] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.810 [2024-07-15 02:21:49.141861] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.810 [2024-07-15 02:21:49.141865] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141869] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938640) on tqpair=0x19016c0 00:19:49.810 [2024-07-15 02:21:49.141885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141890] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141894] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.141902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.141909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141913] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141917] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.141924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.141931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141938] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19016c0) 
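The GET LOG PAGE commands that finish the dump pack two things into cdw10: the Log Page Identifier in bits 7:0 and NUMDL, the 0's-based dword count of the transfer, in bits 31:16. That decodes this group exactly: 07ff0001 above is the Error Information page (LID 01h, 2048 dwords = 8192 bytes, i.e. 128 entries of 64 bytes, matching "Error Log Page Entries Supported: 128"), 007f0002 above and 007f0003 below are the 512-byte SMART/Health and Firmware Slot pages, and 03ff0005 below is the 4096-byte Commands Supported and Effects page; the c2h_data records that follow carry payload_size 8192, 512, 512 and 4096 to match. A sketch of the encoding (LSP and RAE bits left zero, as in the log):

    #include <assert.h>
    #include <stdint.h>

    static uint32_t get_log_page_cdw10(uint8_t lid, uint32_t payload_bytes)
    {
        uint32_t numdl = payload_bytes / 4 - 1;   /* 0's-based dword count */
        return ((numdl & 0xffffu) << 16) | lid;
    }

    int main(void)
    {
        assert(get_log_page_cdw10(0x01, 8192) == 0x07ff0001);  /* Error Information */
        assert(get_log_page_cdw10(0x02,  512) == 0x007f0002);  /* SMART / Health */
        assert(get_log_page_cdw10(0x03,  512) == 0x007f0003);  /* Firmware Slot */
        assert(get_log_page_cdw10(0x05, 4096) == 0x03ff0005);  /* Cmds Supported & Effects */
        return 0;
    }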
00:19:49.810 [2024-07-15 02:21:49.141945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.141952] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141956] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.810 [2024-07-15 02:21:49.141960] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19016c0) 00:19:49.810 [2024-07-15 02:21:49.141966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.810 [2024-07-15 02:21:49.141987] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938640, cid 5, qid 0 00:19:49.810 [2024-07-15 02:21:49.141994] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19384e0, cid 4, qid 0 00:19:49.810 [2024-07-15 02:21:49.141999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19387a0, cid 6, qid 0 00:19:49.810 [2024-07-15 02:21:49.142003] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938900, cid 7, qid 0 00:19:49.810 [2024-07-15 02:21:49.142153] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.810 [2024-07-15 02:21:49.142161] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.810 [2024-07-15 02:21:49.142165] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142169] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19016c0): datao=0, datal=8192, cccid=5 00:19:49.811 [2024-07-15 02:21:49.142174] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1938640) on tqpair(0x19016c0): expected_datao=0, payload_size=8192 00:19:49.811 [2024-07-15 02:21:49.142192] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142196] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142202] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.811 [2024-07-15 02:21:49.142208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.811 [2024-07-15 02:21:49.142212] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142216] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19016c0): datao=0, datal=512, cccid=4 00:19:49.811 [2024-07-15 02:21:49.142220] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19384e0) on tqpair(0x19016c0): expected_datao=0, payload_size=512 00:19:49.811 [2024-07-15 02:21:49.142228] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142231] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.811 [2024-07-15 02:21:49.142243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.811 [2024-07-15 02:21:49.142246] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142250] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19016c0): datao=0, datal=512, cccid=6 00:19:49.811 [2024-07-15 
02:21:49.142254] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19387a0) on tqpair(0x19016c0): expected_datao=0, payload_size=512 00:19:49.811 [2024-07-15 02:21:49.142261] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142265] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142271] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.811 [2024-07-15 02:21:49.142277] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.811 [2024-07-15 02:21:49.142280] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142284] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19016c0): datao=0, datal=4096, cccid=7 00:19:49.811 [2024-07-15 02:21:49.142288] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1938900) on tqpair(0x19016c0): expected_datao=0, payload_size=4096 00:19:49.811 [2024-07-15 02:21:49.142296] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142300] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142309] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.811 [2024-07-15 02:21:49.142315] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.811 [2024-07-15 02:21:49.142318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938640) on tqpair=0x19016c0 00:19:49.811 ===================================================== 00:19:49.811 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.811 ===================================================== 00:19:49.811 Controller Capabilities/Features 00:19:49.811 ================================ 00:19:49.811 Vendor ID: 8086 00:19:49.811 Subsystem Vendor ID: 8086 00:19:49.811 Serial Number: SPDK00000000000001 00:19:49.811 Model Number: SPDK bdev Controller 00:19:49.811 Firmware Version: 24.01.1 00:19:49.811 Recommended Arb Burst: 6 00:19:49.811 IEEE OUI Identifier: e4 d2 5c 00:19:49.811 Multi-path I/O 00:19:49.811 May have multiple subsystem ports: Yes 00:19:49.811 May have multiple controllers: Yes 00:19:49.811 Associated with SR-IOV VF: No 00:19:49.811 Max Data Transfer Size: 131072 00:19:49.811 Max Number of Namespaces: 32 00:19:49.811 Max Number of I/O Queues: 127 00:19:49.811 NVMe Specification Version (VS): 1.3 00:19:49.811 NVMe Specification Version (Identify): 1.3 00:19:49.811 Maximum Queue Entries: 128 00:19:49.811 Contiguous Queues Required: Yes 00:19:49.811 Arbitration Mechanisms Supported 00:19:49.811 Weighted Round Robin: Not Supported 00:19:49.811 Vendor Specific: Not Supported 00:19:49.811 Reset Timeout: 15000 ms 00:19:49.811 Doorbell Stride: 4 bytes 00:19:49.811 NVM Subsystem Reset: Not Supported 00:19:49.811 Command Sets Supported 00:19:49.811 NVM Command Set: Supported 00:19:49.811 Boot Partition: Not Supported 00:19:49.811 Memory Page Size Minimum: 4096 bytes 00:19:49.811 Memory Page Size Maximum: 4096 bytes 00:19:49.811 Persistent Memory Region: Not Supported 00:19:49.811 Optional Asynchronous Events Supported 00:19:49.811 Namespace Attribute Notices: Supported 00:19:49.811 Firmware Activation Notices: Not Supported 00:19:49.811 ANA Change Notices: Not Supported 00:19:49.811 PLE Aggregate Log Change 
Notices: Not Supported 00:19:49.811 LBA Status Info Alert Notices: Not Supported 00:19:49.811 EGE Aggregate Log Change Notices: Not Supported 00:19:49.811 Normal NVM Subsystem Shutdown event: Not Supported 00:19:49.811 Zone Descriptor Change Notices: Not Supported 00:19:49.811 Discovery Log Change Notices: Not Supported 00:19:49.811 Controller Attributes 00:19:49.811 128-bit Host Identifier: Supported 00:19:49.811 Non-Operational Permissive Mode: Not Supported 00:19:49.811 NVM Sets: Not Supported 00:19:49.811 Read Recovery Levels: Not Supported 00:19:49.811 Endurance Groups: Not Supported 00:19:49.811 Predictable Latency Mode: Not Supported 00:19:49.811 Traffic Based Keep Alive: Not Supported 00:19:49.811 Namespace Granularity: Not Supported 00:19:49.811 SQ Associations: Not Supported 00:19:49.811 UUID List: Not Supported 00:19:49.811 Multi-Domain Subsystem: Not Supported 00:19:49.811 Fixed Capacity Management: Not Supported 00:19:49.811 Variable Capacity Management: Not Supported 00:19:49.811 Delete Endurance Group: Not Supported 00:19:49.811 Delete NVM Set: Not Supported 00:19:49.811 Extended LBA Formats Supported: Not Supported 00:19:49.811 Flexible Data Placement Supported: Not Supported 00:19:49.811 00:19:49.811 Controller Memory Buffer Support 00:19:49.811 ================================ 00:19:49.811 Supported: No 00:19:49.811 00:19:49.811 Persistent Memory Region Support 00:19:49.811 ================================ 00:19:49.811 Supported: No 00:19:49.811 00:19:49.811 Admin Command Set Attributes 00:19:49.811 ============================ 00:19:49.811 Security Send/Receive: Not Supported 00:19:49.811 Format NVM: Not Supported 00:19:49.811 Firmware Activate/Download: Not Supported 00:19:49.811 Namespace Management: Not Supported 00:19:49.811 Device Self-Test: Not Supported 00:19:49.811 Directives: Not Supported 00:19:49.811 NVMe-MI: Not Supported 00:19:49.811 Virtualization Management: Not Supported 00:19:49.811 Doorbell Buffer Config: Not Supported 00:19:49.811 Get LBA Status Capability: Not Supported 00:19:49.811 Command & Feature Lockdown Capability: Not Supported 00:19:49.811 Abort Command Limit: 4 00:19:49.811 Async Event Request Limit: 4 00:19:49.811 Number of Firmware Slots: N/A 00:19:49.811 Firmware Slot 1 Read-Only: N/A 00:19:49.811 Firmware Activation Without Reset: [2024-07-15 02:21:49.142348] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.811 [2024-07-15 02:21:49.142355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.811 [2024-07-15 02:21:49.142358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19384e0) on tqpair=0x19016c0 00:19:49.811 [2024-07-15 02:21:49.142374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.811 [2024-07-15 02:21:49.142380] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.811 [2024-07-15 02:21:49.142383] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142387] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19387a0) on tqpair=0x19016c0 00:19:49.811 [2024-07-15 02:21:49.142396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.811 [2024-07-15 02:21:49.142401] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.811 [2024-07-15 02:21:49.142405] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle:
*DEBUG*: enter 00:19:49.811 [2024-07-15 02:21:49.142409] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938900) on tqpair=0x19016c0 00:19:49.811 N/A 00:19:49.811 Multiple Update Detection Support: N/A 00:19:49.811 Firmware Update Granularity: No Information Provided 00:19:49.811 Per-Namespace SMART Log: No 00:19:49.811 Asymmetric Namespace Access Log Page: Not Supported 00:19:49.811 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:49.811 Command Effects Log Page: Supported 00:19:49.811 Get Log Page Extended Data: Supported 00:19:49.811 Telemetry Log Pages: Not Supported 00:19:49.811 Persistent Event Log Pages: Not Supported 00:19:49.811 Supported Log Pages Log Page: May Support 00:19:49.811 Commands Supported & Effects Log Page: Not Supported 00:19:49.811 Feature Identifiers & Effects Log Page: May Support 00:19:49.811 NVMe-MI Commands & Effects Log Page: May Support 00:19:49.811 Data Area 4 for Telemetry Log: Not Supported 00:19:49.811 Error Log Page Entries Supported: 128 00:19:49.811 Keep Alive: Supported 00:19:49.811 Keep Alive Granularity: 10000 ms 00:19:49.811 00:19:49.811 NVM Command Set Attributes 00:19:49.811 ========================== 00:19:49.811 Submission Queue Entry Size 00:19:49.811 Max: 64 00:19:49.811 Min: 64 00:19:49.811 Completion Queue Entry Size 00:19:49.811 Max: 16 00:19:49.811 Min: 16 00:19:49.811 Number of Namespaces: 32 00:19:49.811 Compare Command: Supported 00:19:49.811 Write Uncorrectable Command: Not Supported 00:19:49.811 Dataset Management Command: Supported 00:19:49.811 Write Zeroes Command: Supported 00:19:49.811 Set Features Save Field: Not Supported 00:19:49.811 Reservations: Supported 00:19:49.811 Timestamp: Not Supported 00:19:49.811 Copy: Supported 00:19:49.811 Volatile Write Cache: Present 00:19:49.811 Atomic Write Unit (Normal): 1 00:19:49.811 Atomic Write Unit (PFail): 1 00:19:49.811 Atomic Compare & Write Unit: 1 00:19:49.811 Fused Compare & Write: Supported 00:19:49.811 Scatter-Gather List 00:19:49.811 SGL Command Set: Supported 00:19:49.811 SGL Keyed: Supported 00:19:49.811 SGL Bit Bucket Descriptor: Not Supported 00:19:49.812 SGL Metadata Pointer: Not Supported 00:19:49.812 Oversized SGL: Not Supported 00:19:49.812 SGL Metadata Address: Not Supported 00:19:49.812 SGL Offset: Supported 00:19:49.812 Transport SGL Data Block: Not Supported 00:19:49.812 Replay Protected Memory Block: Not Supported 00:19:49.812 00:19:49.812 Firmware Slot Information 00:19:49.812 ========================= 00:19:49.812 Active slot: 1 00:19:49.812 Slot 1 Firmware Revision: 24.01.1 00:19:49.812 00:19:49.812 00:19:49.812 Commands Supported and Effects 00:19:49.812 ============================== 00:19:49.812 Admin Commands 00:19:49.812 -------------- 00:19:49.812 Get Log Page (02h): Supported 00:19:49.812 Identify (06h): Supported 00:19:49.812 Abort (08h): Supported 00:19:49.812 Set Features (09h): Supported 00:19:49.812 Get Features (0Ah): Supported 00:19:49.812 Asynchronous Event Request (0Ch): Supported 00:19:49.812 Keep Alive (18h): Supported 00:19:49.812 I/O Commands 00:19:49.812 ------------ 00:19:49.812 Flush (00h): Supported LBA-Change 00:19:49.812 Write (01h): Supported LBA-Change 00:19:49.812 Read (02h): Supported 00:19:49.812 Compare (05h): Supported 00:19:49.812 Write Zeroes (08h): Supported LBA-Change 00:19:49.812 Dataset Management (09h): Supported LBA-Change 00:19:49.812 Copy (19h): Supported LBA-Change 00:19:49.812 Unknown (79h): Supported LBA-Change 00:19:49.812 Unknown (7Ah): Supported 00:19:49.812 00:19:49.812 Error Log
00:19:49.812 ========= 00:19:49.812 00:19:49.812 Arbitration 00:19:49.812 =========== 00:19:49.812 Arbitration Burst: 1 00:19:49.812 00:19:49.812 Power Management 00:19:49.812 ================ 00:19:49.812 Number of Power States: 1 00:19:49.812 Current Power State: Power State #0 00:19:49.812 Power State #0: 00:19:49.812 Max Power: 0.00 W 00:19:49.812 Non-Operational State: Operational 00:19:49.812 Entry Latency: Not Reported 00:19:49.812 Exit Latency: Not Reported 00:19:49.812 Relative Read Throughput: 0 00:19:49.812 Relative Read Latency: 0 00:19:49.812 Relative Write Throughput: 0 00:19:49.812 Relative Write Latency: 0 00:19:49.812 Idle Power: Not Reported 00:19:49.812 Active Power: Not Reported 00:19:49.812 Non-Operational Permissive Mode: Not Supported 00:19:49.812 00:19:49.812 Health Information 00:19:49.812 ================== 00:19:49.812 Critical Warnings: 00:19:49.812 Available Spare Space: OK 00:19:49.812 Temperature: OK 00:19:49.812 Device Reliability: OK 00:19:49.812 Read Only: No 00:19:49.812 Volatile Memory Backup: OK 00:19:49.812 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:49.812 Temperature Threshold: [2024-07-15 02:21:49.142531] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.142540] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.142544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19016c0) 00:19:49.812 [2024-07-15 02:21:49.142552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.812 [2024-07-15 02:21:49.142576] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938900, cid 7, qid 0 00:19:49.812 [2024-07-15 02:21:49.146665] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.812 [2024-07-15 02:21:49.146683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.812 [2024-07-15 02:21:49.146688] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.146693] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938900) on tqpair=0x19016c0 00:19:49.812 [2024-07-15 02:21:49.146751] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:49.812 [2024-07-15 02:21:49.146772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.812 [2024-07-15 02:21:49.146781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.812 [2024-07-15 02:21:49.146787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.812 [2024-07-15 02:21:49.146794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.812 [2024-07-15 02:21:49.146804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.146809] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.146812] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.812 [2024-07-15 02:21:49.146821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.812 [2024-07-15 02:21:49.146852] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.812 [2024-07-15 02:21:49.146926] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.812 [2024-07-15 02:21:49.146933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.812 [2024-07-15 02:21:49.146937] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.146941] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.812 [2024-07-15 02:21:49.146950] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.146954] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.146957] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.812 [2024-07-15 02:21:49.146965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.812 [2024-07-15 02:21:49.146992] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.812 [2024-07-15 02:21:49.147069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.812 [2024-07-15 02:21:49.147075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.812 [2024-07-15 02:21:49.147079] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.147083] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.812 [2024-07-15 02:21:49.147089] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:49.812 [2024-07-15 02:21:49.147095] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:49.812 [2024-07-15 02:21:49.147105] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.147109] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.147113] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.812 [2024-07-15 02:21:49.147121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.812 [2024-07-15 02:21:49.147138] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.812 [2024-07-15 02:21:49.147195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.812 [2024-07-15 02:21:49.147202] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.812 [2024-07-15 02:21:49.147206] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.147210] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.812 [2024-07-15 02:21:49.147221] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.147226] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.147229] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x19016c0) 00:19:49.812 [2024-07-15 02:21:49.147237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.812 [2024-07-15 02:21:49.147254] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.812 [2024-07-15 02:21:49.147308] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.812 [2024-07-15 02:21:49.147314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.812 [2024-07-15 02:21:49.147318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.147322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.812 [2024-07-15 02:21:49.147333] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.147337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.812 [2024-07-15 02:21:49.147341] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.812 [2024-07-15 02:21:49.147348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.147366] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.147428] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.147435] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.147438] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147442] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.147454] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147458] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147462] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.147469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.147486] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.147541] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.147547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.147551] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147555] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.147566] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147571] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147574] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.147582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
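
The trace from here on repeats one pattern: a capsule response PDU arrives (pdu type = 5), the matching tcp_req on cid 3 completes, and the host immediately issues another FABRIC PROPERTY GET. This is the controller shutdown started above ("Prepare to destruct SSD", shutdown timeout = 10000 ms): nvme_ctrlr_shutdown_set_cc_done shows CC.SHN was written, and the host now polls CSTS until the controller reports shutdown complete. A minimal C sketch of what each poll encodes, with constants from the NVMe base and NVMe-oF specs; the identifier names are illustrative, not SPDK's internal ones.

    /* Hedged sketch, not SPDK source: register-level meaning of one
     * "FABRIC PROPERTY GET" poll in this trace. */
    #include <stdint.h>
    #include <stdbool.h>

    #define NVME_OPC_FABRIC      0x7f       /* all Fabrics commands use opcode 7Fh   */
    #define NVMF_FCTYPE_PROP_SET 0x00       /* Property Set (wrote CC.SHN earlier)   */
    #define NVMF_FCTYPE_PROP_GET 0x04       /* Property Get (each poll in the log)   */
    #define NVME_PROP_CC         0x14       /* Controller Configuration offset       */
    #define NVME_PROP_CSTS       0x1c       /* Controller Status offset              */
    #define NVME_CC_SHN_NORMAL   (1u << 14) /* CC.SHN = 01b requests normal shutdown */

    /* CSTS.SHST (bits 3:2) reads 10b once shutdown processing completes;
     * the host polls until then or until the 10000 ms timeout logged above. */
    static bool nvme_shutdown_complete(uint32_t csts)
    {
        return ((csts >> 2) & 0x3) == 0x2;
    }
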
00:19:49.813 [2024-07-15 02:21:49.147613] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.147675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.147682] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.147686] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147690] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.147701] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147706] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147710] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.147717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.147737] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.147792] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.147798] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.147802] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147806] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.147817] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147822] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147825] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.147833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.147850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.147907] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.147913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.147917] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.147932] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147936] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.147940] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.147947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.147965] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.148018] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.148025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.148029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.148044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.148059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.148077] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.148131] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.148138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.148142] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.148157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148161] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148165] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.148172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.148190] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.148247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.148265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.148270] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148274] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.148286] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148291] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.148302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.148322] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.148376] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.148395] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 
[2024-07-15 02:21:49.148400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148404] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.148416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148420] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.148432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.148451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.148503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.148514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.148518] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148522] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.148534] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148539] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148543] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.148551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.148569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.148641] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.148653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.148657] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.148674] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148678] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148682] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.148690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.148711] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.148771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.148777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.148781] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148785] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.148796] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148801] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.813 [2024-07-15 02:21:49.148812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.813 [2024-07-15 02:21:49.148830] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.813 [2024-07-15 02:21:49.148886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.813 [2024-07-15 02:21:49.148893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.813 [2024-07-15 02:21:49.148896] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148900] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.813 [2024-07-15 02:21:49.148911] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.813 [2024-07-15 02:21:49.148916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.148920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.148927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.148944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.148999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.149005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.149009] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149013] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.149024] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149029] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149033] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.149040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.149057] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.149114] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.149120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.149124] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149128] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.149139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149144] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.149155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.149172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.149227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.149234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.149237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149241] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.149252] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149257] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149261] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.149268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.149286] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.149339] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.149346] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.149350] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149354] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.149365] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149370] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149373] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.149381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.149398] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.149452] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.149458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.149462] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149466] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.149477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149485] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.149493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.149510] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.149565] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.149571] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.149575] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149579] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.149590] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149594] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.149627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.149648] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.149708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.149715] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.149719] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.149734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149739] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149742] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.149750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.149768] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.149826] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.149837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.149841] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149846] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.149857] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149862] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149866] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.149873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
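
For reference while reading the "pdu type = N" lines: the type codes come from the NVMe/TCP transport spec. The capsule-response polls here are type 5, while the Get Log Page data earlier in the trace arrived as type 7 (C2HData) PDUs carrying the datao/datal/cccid fields printed by nvme_tcp_c2h_data_hdr_handle. A sketch of the full mapping; the enum names are illustrative, not SPDK's exact identifiers.

    /* NVMe/TCP PDU type codes per the transport spec. */
    enum nvme_tcp_pdu_type {
        NVME_TCP_PDU_IC_REQ       = 0x00, /* connection initialization request  */
        NVME_TCP_PDU_IC_RESP      = 0x01, /* connection initialization response */
        NVME_TCP_PDU_H2C_TERM_REQ = 0x02, /* host-to-controller terminate       */
        NVME_TCP_PDU_C2H_TERM_REQ = 0x03, /* controller-to-host terminate       */
        NVME_TCP_PDU_CAPSULE_CMD  = 0x04, /* command capsule (host to ctrlr)    */
        NVME_TCP_PDU_CAPSULE_RESP = 0x05, /* response capsule: "pdu type = 5"   */
        NVME_TCP_PDU_H2C_DATA     = 0x06, /* host-to-controller data            */
        NVME_TCP_PDU_C2H_DATA     = 0x07, /* ctrlr-to-host data: "pdu type = 7" */
        NVME_TCP_PDU_R2T          = 0x08, /* ready-to-transfer                  */
    };
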
00:19:49.814 [2024-07-15 02:21:49.149892] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.149951] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.149958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.149961] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.149977] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149981] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.149985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.149992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.150010] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.150067] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.150074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.150077] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150081] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.150092] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150101] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.150108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.150125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.150180] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.150186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.150190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.150205] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150210] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150213] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.150221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.150238] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.150292] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.150298] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.150302] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150306] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.814 [2024-07-15 02:21:49.150317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150322] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.814 [2024-07-15 02:21:49.150333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.814 [2024-07-15 02:21:49.150350] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.814 [2024-07-15 02:21:49.150404] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.814 [2024-07-15 02:21:49.150415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.814 [2024-07-15 02:21:49.150419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.814 [2024-07-15 02:21:49.150423] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.150435] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150440] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150444] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.150451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.150469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.150523] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.150534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.150538] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150543] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.150554] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150563] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.150570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.150588] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.150661] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.150669] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 
[2024-07-15 02:21:49.150673] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.150689] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150693] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150697] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.150704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.150724] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.150781] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.150788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.150792] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150796] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.150807] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150811] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.150822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.150840] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.150895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.150902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.150905] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.150920] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150925] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.150929] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.150936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.150953] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.151012] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.151023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.151027] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151031] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.151043] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.151059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.151078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.151128] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.151135] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.151139] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151143] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.151154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151163] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.151170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.151187] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.151241] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.151247] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.151251] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151255] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.151266] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151270] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.151282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.151299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.151355] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.151362] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.151366] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.151381] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151385] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151389] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.151396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.151413] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.151465] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.151472] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.151475] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151479] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.151491] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151495] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151499] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.151506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.151523] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.151583] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.151594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.151608] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.151625] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151634] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.151641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.151661] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.151717] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.151724] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.151727] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151731] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.151743] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151747] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151751] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.151758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.151776] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.151833] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.815 [2024-07-15 02:21:49.151840] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.815 [2024-07-15 02:21:49.151843] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151847] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.815 [2024-07-15 02:21:49.151858] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.815 [2024-07-15 02:21:49.151867] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.815 [2024-07-15 02:21:49.151874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.815 [2024-07-15 02:21:49.151891] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.815 [2024-07-15 02:21:49.151949] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.151956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.151960] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.151964] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.151975] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.151979] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.151983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.151990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.152008] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.152063] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.152069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.152073] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152077] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.152088] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152092] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152096] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.152103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
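
From the host side, this whole teardown is what a single detach call produces. A minimal sketch against SPDK's public NVMe host API of this era (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_detach); environment setup and error handling are omitted, and this is illustrative rather than the test's actual code.

    /* Hedged sketch: connect to the target identified above, then detach.
     * Detach triggers "Prepare to destruct SSD" and the CSTS polling traced
     * in this log. Assumes spdk_env_init() has already run. */
    #include "spdk/nvme.h"

    static void connect_and_detach(void)
    {
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        ctrlr = spdk_nvme_connect(&trid, NULL, 0); /* default ctrlr opts */
        if (ctrlr == NULL) {
            return;
        }

        /* ... admin/IO traffic, e.g. the Get Log Page capsules above ... */

        spdk_nvme_detach(ctrlr); /* sets CC.SHN, polls CSTS.SHST, frees ctrlr */
    }
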
00:19:49.816 [2024-07-15 02:21:49.152121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.152175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.152182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.152185] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152190] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.152201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152205] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.152216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.152234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.152294] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.152301] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.152304] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152309] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.152320] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152324] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152328] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.152335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.152352] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.152409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.152416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.152419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152423] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.152435] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.152450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.152467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.152524] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.152531] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.152534] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152538] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.152550] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152554] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152558] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.152565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.152583] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.152650] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.152658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.152662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152666] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.152677] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152682] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152685] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.152693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.152712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.152771] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.152778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.152782] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152786] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.152797] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152801] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152805] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.152812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.152830] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.152887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.152893] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 
[2024-07-15 02:21:49.152897] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152901] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.152912] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152916] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.152920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.152928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.152945] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.152998] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.153005] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.153009] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.153013] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.153024] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.153036] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.153040] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.153047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.153065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.153122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.816 [2024-07-15 02:21:49.153129] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.816 [2024-07-15 02:21:49.153133] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.153137] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.816 [2024-07-15 02:21:49.153148] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.153152] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.816 [2024-07-15 02:21:49.153156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.816 [2024-07-15 02:21:49.153164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.816 [2024-07-15 02:21:49.153181] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.816 [2024-07-15 02:21:49.153234] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.153249] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.153254] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153258] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.153270] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153275] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153279] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.153286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.153305] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.153359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.153370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.153375] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.153390] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.153406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.153424] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.153478] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.153489] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.153493] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153497] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.153509] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153514] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153518] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.153525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.153544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.153620] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.153628] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.153633] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153637] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.153649] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153654] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.153665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.153685] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.153746] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.153758] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.153762] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153766] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.153778] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153787] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.153794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.153812] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.153872] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.153883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.153887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.153903] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153908] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.153912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.153919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.153937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.153992] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.154003] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.154007] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154011] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.154023] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154027] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154031] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.154039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.154057] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.154108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.154114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.154118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154122] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.154133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154137] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.154148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.154166] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.154218] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.154228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.154233] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154237] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.154248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154253] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154257] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.154264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.154282] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.154340] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.154347] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.154350] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154354] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.154365] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154370] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154374] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.154381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:49.817 [2024-07-15 02:21:49.154399] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.154454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.154461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.154465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.154480] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154485] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154489] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.154496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.154514] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.154569] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.154580] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.154584] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.154588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.161639] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.161658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.161663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19016c0) 00:19:49.817 [2024-07-15 02:21:49.161672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.817 [2024-07-15 02:21:49.161700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1938380, cid 3, qid 0 00:19:49.817 [2024-07-15 02:21:49.161765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.817 [2024-07-15 02:21:49.161772] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.817 [2024-07-15 02:21:49.161776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.817 [2024-07-15 02:21:49.161780] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1938380) on tqpair=0x19016c0 00:19:49.817 [2024-07-15 02:21:49.161790] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 14 milliseconds 00:19:49.817 0 Kelvin (-273 Celsius) 00:19:49.817 Available Spare: 0% 00:19:49.817 Available Spare Threshold: 0% 00:19:49.817 Life Percentage Used: 0% 00:19:49.818 Data Units Read: 0 00:19:49.818 Data Units Written: 0 00:19:49.818 Host Read Commands: 0 00:19:49.818 Host Write Commands: 0 00:19:49.818 Controller Busy Time: 0 minutes 00:19:49.818 Power Cycles: 0 00:19:49.818 Power On Hours: 0 hours 00:19:49.818 Unsafe Shutdowns: 0 00:19:49.818 Unrecoverable Media Errors: 0 00:19:49.818 Lifetime Error Log Entries: 0 00:19:49.818 Warning 
Temperature Time: 0 minutes 00:19:49.818 Critical Temperature Time: 0 minutes 00:19:49.818 00:19:49.818 Number of Queues 00:19:49.818 ================ 00:19:49.818 Number of I/O Submission Queues: 127 00:19:49.818 Number of I/O Completion Queues: 127 00:19:49.818 00:19:49.818 Active Namespaces 00:19:49.818 ================= 00:19:49.818 Namespace ID:1 00:19:49.818 Error Recovery Timeout: Unlimited 00:19:49.818 Command Set Identifier: NVM (00h) 00:19:49.818 Deallocate: Supported 00:19:49.818 Deallocated/Unwritten Error: Not Supported 00:19:49.818 Deallocated Read Value: Unknown 00:19:49.818 Deallocate in Write Zeroes: Not Supported 00:19:49.818 Deallocated Guard Field: 0xFFFF 00:19:49.818 Flush: Supported 00:19:49.818 Reservation: Supported 00:19:49.818 Namespace Sharing Capabilities: Multiple Controllers 00:19:49.818 Size (in LBAs): 131072 (0GiB) 00:19:49.818 Capacity (in LBAs): 131072 (0GiB) 00:19:49.818 Utilization (in LBAs): 131072 (0GiB) 00:19:49.818 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:49.818 EUI64: ABCDEF0123456789 00:19:49.818 UUID: 2b80fb6b-19be-4a31-82b7-c04f8ba27190 00:19:49.818 Thin Provisioning: Not Supported 00:19:49.818 Per-NS Atomic Units: Yes 00:19:49.818 Atomic Boundary Size (Normal): 0 00:19:49.818 Atomic Boundary Size (PFail): 0 00:19:49.818 Atomic Boundary Offset: 0 00:19:49.818 Maximum Single Source Range Length: 65535 00:19:49.818 Maximum Copy Length: 65535 00:19:49.818 Maximum Source Range Count: 1 00:19:49.818 NGUID/EUI64 Never Reused: No 00:19:49.818 Namespace Write Protected: No 00:19:49.818 Number of LBA Formats: 1 00:19:49.818 Current LBA Format: LBA Format #00 00:19:49.818 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:49.818 00:19:49.818 02:21:49 -- host/identify.sh@51 -- # sync 00:19:49.818 02:21:49 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.818 02:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:49.818 02:21:49 -- common/autotest_common.sh@10 -- # set +x 00:19:49.818 02:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:49.818 02:21:49 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:49.818 02:21:49 -- host/identify.sh@56 -- # nvmftestfini 00:19:49.818 02:21:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:49.818 02:21:49 -- nvmf/common.sh@116 -- # sync 00:19:49.818 02:21:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:49.818 02:21:49 -- nvmf/common.sh@119 -- # set +e 00:19:49.818 02:21:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:49.818 02:21:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:49.818 rmmod nvme_tcp 00:19:49.818 rmmod nvme_fabrics 00:19:49.818 rmmod nvme_keyring 00:19:49.818 02:21:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:49.818 02:21:49 -- nvmf/common.sh@123 -- # set -e 00:19:49.818 02:21:49 -- nvmf/common.sh@124 -- # return 0 00:19:49.818 02:21:49 -- nvmf/common.sh@477 -- # '[' -n 92549 ']' 00:19:49.818 02:21:49 -- nvmf/common.sh@478 -- # killprocess 92549 00:19:49.818 02:21:49 -- common/autotest_common.sh@926 -- # '[' -z 92549 ']' 00:19:49.818 02:21:49 -- common/autotest_common.sh@930 -- # kill -0 92549 00:19:49.818 02:21:49 -- common/autotest_common.sh@931 -- # uname 00:19:49.818 02:21:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:49.818 02:21:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92549 00:19:49.818 02:21:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:49.818 02:21:49 -- common/autotest_common.sh@936 -- # '[' 
reactor_0 = sudo ']' 00:19:49.818 02:21:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92549' 00:19:49.818 killing process with pid 92549 00:19:49.818 02:21:49 -- common/autotest_common.sh@945 -- # kill 92549 00:19:49.818 [2024-07-15 02:21:49.313380] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:49.818 02:21:49 -- common/autotest_common.sh@950 -- # wait 92549 00:19:50.077 02:21:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:50.077 02:21:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:50.077 02:21:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:50.077 02:21:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.077 02:21:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:50.077 02:21:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.077 02:21:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.077 02:21:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.077 02:21:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:50.077 00:19:50.077 real 0m2.471s 00:19:50.077 user 0m7.072s 00:19:50.077 sys 0m0.627s 00:19:50.077 02:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.077 02:21:49 -- common/autotest_common.sh@10 -- # set +x 00:19:50.077 ************************************ 00:19:50.077 END TEST nvmf_identify 00:19:50.077 ************************************ 00:19:50.077 02:21:49 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:50.077 02:21:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:50.077 02:21:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:50.077 02:21:49 -- common/autotest_common.sh@10 -- # set +x 00:19:50.077 ************************************ 00:19:50.077 START TEST nvmf_perf 00:19:50.077 ************************************ 00:19:50.077 02:21:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:50.336 * Looking for test storage... 
00:19:50.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:50.336 02:21:49 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:50.336 02:21:49 -- nvmf/common.sh@7 -- # uname -s 00:19:50.336 02:21:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.336 02:21:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.336 02:21:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.336 02:21:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.336 02:21:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.336 02:21:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.336 02:21:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.336 02:21:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.336 02:21:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.336 02:21:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.336 02:21:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:50.336 02:21:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:19:50.336 02:21:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.336 02:21:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.336 02:21:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:50.336 02:21:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:50.336 02:21:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.336 02:21:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.336 02:21:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.336 02:21:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.336 02:21:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.336 02:21:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.336 02:21:49 -- paths/export.sh@5 -- 
# export PATH 00:19:50.336 02:21:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.336 02:21:49 -- nvmf/common.sh@46 -- # : 0 00:19:50.336 02:21:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:50.336 02:21:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:50.336 02:21:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:50.336 02:21:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.336 02:21:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.336 02:21:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:50.336 02:21:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:50.336 02:21:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:50.336 02:21:49 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:50.336 02:21:49 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:50.336 02:21:49 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:50.336 02:21:49 -- host/perf.sh@17 -- # nvmftestinit 00:19:50.336 02:21:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:50.336 02:21:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.336 02:21:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:50.336 02:21:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:50.336 02:21:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:50.336 02:21:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.336 02:21:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.336 02:21:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.336 02:21:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:50.336 02:21:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:50.336 02:21:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:50.336 02:21:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:50.336 02:21:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:50.336 02:21:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:50.336 02:21:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.336 02:21:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.336 02:21:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:50.336 02:21:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:50.336 02:21:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:50.336 02:21:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:50.336 02:21:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:50.336 02:21:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.336 02:21:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:50.336 02:21:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:50.336 02:21:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:50.336 02:21:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:50.336 02:21:49 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:50.336 02:21:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:50.336 Cannot find device "nvmf_tgt_br" 00:19:50.336 02:21:49 -- nvmf/common.sh@154 -- # true 00:19:50.336 02:21:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.336 Cannot find device "nvmf_tgt_br2" 00:19:50.336 02:21:49 -- nvmf/common.sh@155 -- # true 00:19:50.336 02:21:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:50.336 02:21:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:50.336 Cannot find device "nvmf_tgt_br" 00:19:50.336 02:21:49 -- nvmf/common.sh@157 -- # true 00:19:50.336 02:21:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:50.336 Cannot find device "nvmf_tgt_br2" 00:19:50.336 02:21:49 -- nvmf/common.sh@158 -- # true 00:19:50.336 02:21:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:50.336 02:21:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:50.336 02:21:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.336 02:21:49 -- nvmf/common.sh@161 -- # true 00:19:50.336 02:21:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.336 02:21:49 -- nvmf/common.sh@162 -- # true 00:19:50.336 02:21:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:50.336 02:21:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:50.336 02:21:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:50.336 02:21:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:50.336 02:21:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:50.595 02:21:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:50.595 02:21:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:50.595 02:21:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:50.595 02:21:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:50.595 02:21:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:50.595 02:21:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:50.595 02:21:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:50.595 02:21:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:50.595 02:21:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:50.595 02:21:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:50.595 02:21:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:50.595 02:21:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:50.595 02:21:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:50.595 02:21:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:50.595 02:21:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:50.595 02:21:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:50.595 02:21:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:50.595 02:21:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:50.595 02:21:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:50.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:19:50.595 00:19:50.595 --- 10.0.0.2 ping statistics --- 00:19:50.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.595 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:50.595 02:21:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:50.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:50.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:19:50.595 00:19:50.595 --- 10.0.0.3 ping statistics --- 00:19:50.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.595 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:50.595 02:21:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:50.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:50.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:50.595 00:19:50.595 --- 10.0.0.1 ping statistics --- 00:19:50.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.595 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:50.595 02:21:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.595 02:21:50 -- nvmf/common.sh@421 -- # return 0 00:19:50.595 02:21:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:50.595 02:21:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.595 02:21:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:50.595 02:21:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:50.595 02:21:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.595 02:21:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:50.595 02:21:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:50.595 02:21:50 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:50.595 02:21:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:50.595 02:21:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:50.595 02:21:50 -- common/autotest_common.sh@10 -- # set +x 00:19:50.595 02:21:50 -- nvmf/common.sh@469 -- # nvmfpid=92775 00:19:50.595 02:21:50 -- nvmf/common.sh@470 -- # waitforlisten 92775 00:19:50.595 02:21:50 -- common/autotest_common.sh@819 -- # '[' -z 92775 ']' 00:19:50.595 02:21:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:50.595 02:21:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.595 02:21:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:50.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.595 02:21:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.595 02:21:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:50.595 02:21:50 -- common/autotest_common.sh@10 -- # set +x 00:19:50.595 [2024-07-15 02:21:50.130265] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
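For orientation, the veth/netns plumbing that nvmf_veth_init performed in the commands above condenses to the sketch below. It restates commands already visible in this log rather than adding new steps; the namespace, interface names, 10.0.0.0/24 addresses, and nvmf_tgt flags are the values this run used (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is built the same way and is omitted here for brevity):

# target runs in its own network namespace; one veth pair per endpoint
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# initiator side is addressed on the host, target side inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the peer ends together and admit NVMe/TCP traffic on port 4420
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# launch the SPDK target inside the namespace with the flags used above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The DPDK EAL output that follows is this nvmf_tgt process starting up.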
00:19:50.595 [2024-07-15 02:21:50.130371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.854 [2024-07-15 02:21:50.266822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.854 [2024-07-15 02:21:50.357857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:50.854 [2024-07-15 02:21:50.358028] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.854 [2024-07-15 02:21:50.358044] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.854 [2024-07-15 02:21:50.358055] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.854 [2024-07-15 02:21:50.358242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.854 [2024-07-15 02:21:50.358683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.854 [2024-07-15 02:21:50.358948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.854 [2024-07-15 02:21:50.358862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.790 02:21:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:51.790 02:21:51 -- common/autotest_common.sh@852 -- # return 0 00:19:51.790 02:21:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:51.790 02:21:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:51.790 02:21:51 -- common/autotest_common.sh@10 -- # set +x 00:19:51.790 02:21:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.790 02:21:51 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:51.790 02:21:51 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:52.048 02:21:51 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:52.048 02:21:51 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:52.307 02:21:51 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:19:52.307 02:21:51 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:52.875 02:21:52 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:52.875 02:21:52 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:19:52.875 02:21:52 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:52.875 02:21:52 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:52.875 02:21:52 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:52.875 [2024-07-15 02:21:52.382432] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.875 02:21:52 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.156 02:21:52 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:53.156 02:21:52 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:53.426 02:21:52 -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:53.426 02:21:52 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:53.684 
02:21:53 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.942 [2024-07-15 02:21:53.319669] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.942 02:21:53 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:54.201 02:21:53 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:19:54.201 02:21:53 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:54.201 02:21:53 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:54.201 02:21:53 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:19:55.134 Initializing NVMe Controllers 00:19:55.134 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:19:55.134 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:19:55.134 Initialization complete. Launching workers. 00:19:55.134 ======================================================== 00:19:55.134 Latency(us) 00:19:55.134 Device Information : IOPS MiB/s Average min max 00:19:55.134 PCIE (0000:00:06.0) NSID 1 from core 0: 23463.36 91.65 1364.19 352.67 7983.29 00:19:55.134 ======================================================== 00:19:55.134 Total : 23463.36 91.65 1364.19 352.67 7983.29 00:19:55.134 00:19:55.134 02:21:54 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:56.509 Initializing NVMe Controllers 00:19:56.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:56.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:56.509 Initialization complete. Launching workers. 
00:19:56.509 ======================================================== 00:19:56.509 Latency(us) 00:19:56.509 Device Information : IOPS MiB/s Average min max 00:19:56.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3788.94 14.80 263.60 103.77 4224.43 00:19:56.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8055.75 4997.68 12024.82 00:19:56.509 ======================================================== 00:19:56.509 Total : 3913.94 15.29 512.45 103.77 12024.82 00:19:56.509 00:19:56.509 02:21:56 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:57.885 [2024-07-15 02:21:57.298655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe5fd0 is same with the state(5) to be set 
00:19:57.885 Initializing NVMe Controllers 00:19:57.885 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:57.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:57.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:57.885 Initialization complete. Launching workers. 00:19:57.885 ======================================================== 00:19:57.885 Latency(us) 00:19:57.885 Device Information : IOPS MiB/s Average min max 00:19:57.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9174.08 35.84 3489.87 658.36 8359.39 00:19:57.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2690.35 10.51 12000.02 7201.34 22854.86 00:19:57.885 ======================================================== 00:19:57.885 Total : 11864.43 46.35 5419.61 658.36 22854.86 00:19:57.885 00:19:57.885 02:21:57 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:19:57.885 02:21:57 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:00.417 Initializing NVMe Controllers 00:20:00.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.417 Controller IO queue size 128, less than required. 
00:20:00.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:00.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:00.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:00.417 Initialization complete. Launching workers. 00:20:00.417 ======================================================== 00:20:00.417 Latency(us) 00:20:00.417 Device Information : IOPS MiB/s Average min max 00:20:00.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1730.45 432.61 75368.74 47996.93 122375.13 00:20:00.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.98 151.00 218650.75 84051.45 293017.96 00:20:00.417 ======================================================== 00:20:00.417 Total : 2334.43 583.61 112439.78 47996.93 293017.96 00:20:00.417 00:20:00.417 02:21:59 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:00.675 No valid NVMe controllers or AIO or URING devices found 00:20:00.675 Initializing NVMe Controllers 00:20:00.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.675 Controller IO queue size 128, less than required. 00:20:00.675 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:00.675 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:00.675 Controller IO queue size 128, less than required. 00:20:00.675 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:00.675 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:00.675 WARNING: Some requested NVMe devices were skipped 00:20:00.675 02:22:00 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:03.206 Initializing NVMe Controllers 00:20:03.206 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.206 Controller IO queue size 128, less than required. 00:20:03.206 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:03.206 Controller IO queue size 128, less than required. 00:20:03.206 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:03.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:03.206 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:03.206 Initialization complete. Launching workers. 
00:20:03.206 00:20:03.206 ==================== 00:20:03.206 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:03.206 TCP transport: 00:20:03.206 polls: 8244 00:20:03.206 idle_polls: 4660 00:20:03.206 sock_completions: 3584 00:20:03.206 nvme_completions: 3907 00:20:03.206 submitted_requests: 6014 00:20:03.206 queued_requests: 1 00:20:03.206 00:20:03.206 ==================== 00:20:03.206 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:03.206 TCP transport: 00:20:03.206 polls: 8476 00:20:03.206 idle_polls: 4991 00:20:03.206 sock_completions: 3485 00:20:03.206 nvme_completions: 6733 00:20:03.206 submitted_requests: 10303 00:20:03.206 queued_requests: 1 00:20:03.206 ======================================================== 00:20:03.206 Latency(us) 00:20:03.206 Device Information : IOPS MiB/s Average min max 00:20:03.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1040.44 260.11 126227.15 85779.93 210064.82 00:20:03.206 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1746.90 436.72 73470.01 38401.92 112280.85 00:20:03.206 ======================================================== 00:20:03.206 Total : 2787.34 696.83 93162.86 38401.92 210064.82 00:20:03.206 00:20:03.206 02:22:02 -- host/perf.sh@66 -- # sync 00:20:03.206 02:22:02 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.773 02:22:03 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:03.773 02:22:03 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:03.773 02:22:03 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:03.773 02:22:03 -- host/perf.sh@72 -- # ls_guid=c96c94d6-d8c8-481b-932a-43f7cdfd8f80 00:20:03.773 02:22:03 -- host/perf.sh@73 -- # get_lvs_free_mb c96c94d6-d8c8-481b-932a-43f7cdfd8f80 00:20:03.773 02:22:03 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c96c94d6-d8c8-481b-932a-43f7cdfd8f80 00:20:03.773 02:22:03 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:03.773 02:22:03 -- common/autotest_common.sh@1345 -- # local fc 00:20:03.773 02:22:03 -- common/autotest_common.sh@1346 -- # local cs 00:20:03.773 02:22:03 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:04.031 02:22:03 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:04.031 { 00:20:04.031 "base_bdev": "Nvme0n1", 00:20:04.031 "block_size": 4096, 00:20:04.031 "cluster_size": 4194304, 00:20:04.031 "free_clusters": 1278, 00:20:04.031 "name": "lvs_0", 00:20:04.031 "total_data_clusters": 1278, 00:20:04.031 "uuid": "c96c94d6-d8c8-481b-932a-43f7cdfd8f80" 00:20:04.031 } 00:20:04.031 ]' 00:20:04.031 02:22:03 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c96c94d6-d8c8-481b-932a-43f7cdfd8f80") .free_clusters' 00:20:04.290 02:22:03 -- common/autotest_common.sh@1348 -- # fc=1278 00:20:04.290 02:22:03 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c96c94d6-d8c8-481b-932a-43f7cdfd8f80") .cluster_size' 00:20:04.290 5112 00:20:04.290 02:22:03 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:04.290 02:22:03 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:20:04.290 02:22:03 -- common/autotest_common.sh@1353 -- # echo 5112 00:20:04.290 02:22:03 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:04.290 02:22:03 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u c96c94d6-d8c8-481b-932a-43f7cdfd8f80 lbd_0 5112 00:20:04.548 02:22:03 -- host/perf.sh@80 -- # lb_guid=5fcf50f9-d91a-4bbc-a8f7-8157a2099804 00:20:04.548 02:22:03 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 5fcf50f9-d91a-4bbc-a8f7-8157a2099804 lvs_n_0 00:20:04.806 02:22:04 -- host/perf.sh@83 -- # ls_nested_guid=f6405b20-5882-4785-83ae-f2000c1316cc 00:20:04.806 02:22:04 -- host/perf.sh@84 -- # get_lvs_free_mb f6405b20-5882-4785-83ae-f2000c1316cc 00:20:04.806 02:22:04 -- common/autotest_common.sh@1343 -- # local lvs_uuid=f6405b20-5882-4785-83ae-f2000c1316cc 00:20:04.806 02:22:04 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:04.806 02:22:04 -- common/autotest_common.sh@1345 -- # local fc 00:20:04.806 02:22:04 -- common/autotest_common.sh@1346 -- # local cs 00:20:04.806 02:22:04 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:05.065 02:22:04 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:05.065 { 00:20:05.065 "base_bdev": "Nvme0n1", 00:20:05.065 "block_size": 4096, 00:20:05.065 "cluster_size": 4194304, 00:20:05.065 "free_clusters": 0, 00:20:05.065 "name": "lvs_0", 00:20:05.065 "total_data_clusters": 1278, 00:20:05.065 "uuid": "c96c94d6-d8c8-481b-932a-43f7cdfd8f80" 00:20:05.065 }, 00:20:05.065 { 00:20:05.065 "base_bdev": "5fcf50f9-d91a-4bbc-a8f7-8157a2099804", 00:20:05.065 "block_size": 4096, 00:20:05.065 "cluster_size": 4194304, 00:20:05.065 "free_clusters": 1276, 00:20:05.065 "name": "lvs_n_0", 00:20:05.065 "total_data_clusters": 1276, 00:20:05.065 "uuid": "f6405b20-5882-4785-83ae-f2000c1316cc" 00:20:05.065 } 00:20:05.065 ]' 00:20:05.065 02:22:04 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="f6405b20-5882-4785-83ae-f2000c1316cc") .free_clusters' 00:20:05.065 02:22:04 -- common/autotest_common.sh@1348 -- # fc=1276 00:20:05.065 02:22:04 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="f6405b20-5882-4785-83ae-f2000c1316cc") .cluster_size' 00:20:05.065 02:22:04 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:05.065 02:22:04 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:20:05.065 02:22:04 -- common/autotest_common.sh@1353 -- # echo 5104 00:20:05.065 5104 00:20:05.065 02:22:04 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:05.065 02:22:04 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f6405b20-5882-4785-83ae-f2000c1316cc lbd_nest_0 5104 00:20:05.323 02:22:04 -- host/perf.sh@88 -- # lb_nested_guid=59cb559f-7af9-455c-8293-2e44f93df660 00:20:05.323 02:22:04 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.581 02:22:05 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:05.581 02:22:05 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 59cb559f-7af9-455c-8293-2e44f93df660 00:20:05.840 02:22:05 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.099 02:22:05 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:06.099 02:22:05 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:06.099 02:22:05 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:06.099 02:22:05 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:06.099 02:22:05 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.357 No valid NVMe controllers or AIO or URING devices found 00:20:06.357 Initializing NVMe Controllers 00:20:06.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:06.357 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:06.357 WARNING: Some requested NVMe devices were skipped 00:20:06.357 02:22:05 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:06.357 02:22:05 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:18.615 Initializing NVMe Controllers 00:20:18.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:18.615 Initialization complete. Launching workers. 00:20:18.615 ======================================================== 00:20:18.615 Latency(us) 00:20:18.615 Device Information : IOPS MiB/s Average min max 00:20:18.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 952.30 119.04 1050.14 351.20 7737.35 00:20:18.615 ======================================================== 00:20:18.615 Total : 952.30 119.04 1050.14 351.20 7737.35 00:20:18.615 00:20:18.615 02:22:16 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:18.615 02:22:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:18.615 02:22:16 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:18.615 No valid NVMe controllers or AIO or URING devices found 00:20:18.615 Initializing NVMe Controllers 00:20:18.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.615 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:18.615 WARNING: Some requested NVMe devices were skipped 00:20:18.615 02:22:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:18.615 02:22:16 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:28.585 Initializing NVMe Controllers 00:20:28.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:28.585 Initialization complete. Launching workers. 
00:20:28.585 ======================================================== 00:20:28.585 Latency(us) 00:20:28.585 Device Information : IOPS MiB/s Average min max 00:20:28.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1146.70 143.34 27930.48 8068.86 244302.59 00:20:28.585 ======================================================== 00:20:28.585 Total : 1146.70 143.34 27930.48 8068.86 244302.59 00:20:28.585 00:20:28.585 02:22:26 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:28.585 02:22:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:28.585 02:22:26 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:28.585 No valid NVMe controllers or AIO or URING devices found 00:20:28.585 Initializing NVMe Controllers 00:20:28.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.585 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:28.585 WARNING: Some requested NVMe devices were skipped 00:20:28.585 02:22:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:28.585 02:22:26 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.586 Initializing NVMe Controllers 00:20:38.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:38.586 Controller IO queue size 128, less than required. 00:20:38.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:38.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:38.586 Initialization complete. Launching workers. 
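(The q=128 latency table below completes the sweep started at host/perf.sh@95.) The qd_depth/io_size xtrace lines reconstruct to a nested loop of roughly this shape; a sketch inferred from the trace, not the verbatim perf.sh source:

  # Sweep queue depth x IO size against the exported NVMe-oF/TCP namespace
  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done

Every -o 512 pass is skipped ("No valid NVMe controllers or AIO or URING devices found") because the exported lbd_nest_0 namespace has a 4096 B block size, so only the 131072-byte passes produce latency tables.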
00:20:38.586 ======================================================== 00:20:38.586 Latency(us) 00:20:38.586 Device Information : IOPS MiB/s Average min max 00:20:38.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4236.19 529.52 30223.69 12561.03 66657.31 00:20:38.586 ======================================================== 00:20:38.586 Total : 4236.19 529.52 30223.69 12561.03 66657.31 00:20:38.586 00:20:38.586 02:22:37 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.586 02:22:37 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 59cb559f-7af9-455c-8293-2e44f93df660 00:20:38.586 02:22:37 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:38.586 02:22:38 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5fcf50f9-d91a-4bbc-a8f7-8157a2099804 00:20:38.844 02:22:38 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:39.101 02:22:38 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:39.101 02:22:38 -- host/perf.sh@114 -- # nvmftestfini 00:20:39.101 02:22:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:39.101 02:22:38 -- nvmf/common.sh@116 -- # sync 00:20:39.101 02:22:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:39.101 02:22:38 -- nvmf/common.sh@119 -- # set +e 00:20:39.101 02:22:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:39.101 02:22:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:39.102 rmmod nvme_tcp 00:20:39.102 rmmod nvme_fabrics 00:20:39.102 rmmod nvme_keyring 00:20:39.102 02:22:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:39.102 02:22:38 -- nvmf/common.sh@123 -- # set -e 00:20:39.102 02:22:38 -- nvmf/common.sh@124 -- # return 0 00:20:39.102 02:22:38 -- nvmf/common.sh@477 -- # '[' -n 92775 ']' 00:20:39.102 02:22:38 -- nvmf/common.sh@478 -- # killprocess 92775 00:20:39.102 02:22:38 -- common/autotest_common.sh@926 -- # '[' -z 92775 ']' 00:20:39.102 02:22:38 -- common/autotest_common.sh@930 -- # kill -0 92775 00:20:39.102 02:22:38 -- common/autotest_common.sh@931 -- # uname 00:20:39.102 02:22:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:39.102 02:22:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92775 00:20:39.102 02:22:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:39.102 02:22:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:39.102 killing process with pid 92775 00:20:39.102 02:22:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92775' 00:20:39.102 02:22:38 -- common/autotest_common.sh@945 -- # kill 92775 00:20:39.102 02:22:38 -- common/autotest_common.sh@950 -- # wait 92775 00:20:40.474 02:22:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:40.474 02:22:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:40.474 02:22:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:40.474 02:22:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.474 02:22:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:40.474 02:22:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.474 02:22:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.474 02:22:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.732 02:22:40 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:20:40.732 00:20:40.732 real 0m50.422s 00:20:40.732 user 3m10.286s 00:20:40.732 sys 0m10.767s 00:20:40.732 02:22:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.732 02:22:40 -- common/autotest_common.sh@10 -- # set +x 00:20:40.732 ************************************ 00:20:40.732 END TEST nvmf_perf 00:20:40.732 ************************************ 00:20:40.732 02:22:40 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:40.732 02:22:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:40.732 02:22:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:40.732 02:22:40 -- common/autotest_common.sh@10 -- # set +x 00:20:40.732 ************************************ 00:20:40.732 START TEST nvmf_fio_host 00:20:40.732 ************************************ 00:20:40.732 02:22:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:40.732 * Looking for test storage... 00:20:40.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:40.732 02:22:40 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.732 02:22:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.732 02:22:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.732 02:22:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.732 02:22:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.732 02:22:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.732 02:22:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.732 02:22:40 -- paths/export.sh@5 -- # export PATH 00:20:40.732 02:22:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.732 02:22:40 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:40.732 02:22:40 -- nvmf/common.sh@7 -- # uname -s 00:20:40.732 02:22:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.732 02:22:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.732 02:22:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.732 02:22:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.732 02:22:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.732 02:22:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.732 02:22:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.732 02:22:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.732 02:22:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.732 02:22:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.732 02:22:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:20:40.732 02:22:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:20:40.732 02:22:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.732 02:22:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.732 02:22:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.732 02:22:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.732 02:22:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.732 02:22:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.732 02:22:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.733 02:22:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.733 02:22:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.733 02:22:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.733 02:22:40 -- paths/export.sh@5 -- # export PATH 00:20:40.733 02:22:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.733 02:22:40 -- nvmf/common.sh@46 -- # : 0 00:20:40.733 02:22:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:40.733 02:22:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:40.733 02:22:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:40.733 02:22:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.733 02:22:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.733 02:22:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:40.733 02:22:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:40.733 02:22:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:40.733 02:22:40 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:40.733 02:22:40 -- host/fio.sh@14 -- # nvmftestinit 00:20:40.733 02:22:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:40.733 02:22:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.733 02:22:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:40.733 02:22:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:40.733 02:22:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:40.733 02:22:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.733 02:22:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.733 02:22:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.733 02:22:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:40.733 02:22:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:40.733 02:22:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:40.733 02:22:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:40.733 02:22:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:40.733 02:22:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:40.733 02:22:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.733 02:22:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.733 02:22:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:40.733 02:22:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:40.733 02:22:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.733 02:22:40 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.733 02:22:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.733 02:22:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.733 02:22:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.733 02:22:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.733 02:22:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.733 02:22:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.733 02:22:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:40.733 02:22:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:40.733 Cannot find device "nvmf_tgt_br" 00:20:40.733 02:22:40 -- nvmf/common.sh@154 -- # true 00:20:40.733 02:22:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.733 Cannot find device "nvmf_tgt_br2" 00:20:40.733 02:22:40 -- nvmf/common.sh@155 -- # true 00:20:40.733 02:22:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:40.733 02:22:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:40.733 Cannot find device "nvmf_tgt_br" 00:20:40.733 02:22:40 -- nvmf/common.sh@157 -- # true 00:20:40.733 02:22:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:40.733 Cannot find device "nvmf_tgt_br2" 00:20:40.733 02:22:40 -- nvmf/common.sh@158 -- # true 00:20:40.733 02:22:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:40.991 02:22:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:40.991 02:22:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.991 02:22:40 -- nvmf/common.sh@161 -- # true 00:20:40.991 02:22:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.991 02:22:40 -- nvmf/common.sh@162 -- # true 00:20:40.991 02:22:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.991 02:22:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.991 02:22:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.991 02:22:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.991 02:22:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.991 02:22:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.991 02:22:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.991 02:22:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.991 02:22:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.991 02:22:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:40.991 02:22:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:40.991 02:22:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:40.991 02:22:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:40.991 02:22:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.991 02:22:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
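nvmf_veth_init, traced above and continuing below, wires the initiator in the root namespace (10.0.0.1) to the SPDK target inside nvmf_tgt_ns_spdk (10.0.0.2) through veth pairs enslaved to a bridge. Condensed to its essentials, using the same iproute2 commands as the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the teardown are omitted):

  # Sketch of the veth/bridge topology nvmf_veth_init assembles
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # allow NVMe/TCP (port 4420) in and bridged traffic through
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target, as verified below

The pings that follow confirm reachability in both directions before the target is launched under ip netns exec nvmf_tgt_ns_spdk.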
00:20:40.991 02:22:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.991 02:22:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:40.991 02:22:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:40.991 02:22:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.991 02:22:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.991 02:22:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.991 02:22:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.991 02:22:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.991 02:22:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:40.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:20:40.991 00:20:40.991 --- 10.0.0.2 ping statistics --- 00:20:40.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.991 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:40.991 02:22:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:40.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:40.991 00:20:40.991 --- 10.0.0.3 ping statistics --- 00:20:40.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.991 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:40.991 02:22:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:40.991 00:20:40.991 --- 10.0.0.1 ping statistics --- 00:20:40.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.991 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:40.991 02:22:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.991 02:22:40 -- nvmf/common.sh@421 -- # return 0 00:20:40.991 02:22:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.991 02:22:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.991 02:22:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:40.991 02:22:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:40.991 02:22:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.991 02:22:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:40.991 02:22:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:41.249 02:22:40 -- host/fio.sh@16 -- # [[ y != y ]] 00:20:41.249 02:22:40 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:41.249 02:22:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:41.249 02:22:40 -- common/autotest_common.sh@10 -- # set +x 00:20:41.249 02:22:40 -- host/fio.sh@24 -- # nvmfpid=93736 00:20:41.249 02:22:40 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:41.249 02:22:40 -- host/fio.sh@28 -- # waitforlisten 93736 00:20:41.249 02:22:40 -- common/autotest_common.sh@819 -- # '[' -z 93736 ']' 00:20:41.249 02:22:40 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:41.249 02:22:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.249 02:22:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:41.249 02:22:40 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.249 02:22:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:41.249 02:22:40 -- common/autotest_common.sh@10 -- # set +x 00:20:41.249 [2024-07-15 02:22:40.614163] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:41.249 [2024-07-15 02:22:40.614239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.249 [2024-07-15 02:22:40.747083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:41.507 [2024-07-15 02:22:40.826275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:41.507 [2024-07-15 02:22:40.826469] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.507 [2024-07-15 02:22:40.826489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.507 [2024-07-15 02:22:40.826501] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:41.507 [2024-07-15 02:22:40.826695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.507 [2024-07-15 02:22:40.826808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.507 [2024-07-15 02:22:40.827110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.507 [2024-07-15 02:22:40.827126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.073 02:22:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:42.073 02:22:41 -- common/autotest_common.sh@852 -- # return 0 00:20:42.073 02:22:41 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:42.330 [2024-07-15 02:22:41.809969] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.330 02:22:41 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:42.330 02:22:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:42.330 02:22:41 -- common/autotest_common.sh@10 -- # set +x 00:20:42.330 02:22:41 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:42.588 Malloc1 00:20:42.588 02:22:42 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.846 02:22:42 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:43.103 02:22:42 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.360 [2024-07-15 02:22:42.818064] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.360 02:22:42 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:43.617 02:22:43 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:43.617 02:22:43 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:43.617 02:22:43 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:43.617 02:22:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:43.617 02:22:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:43.617 02:22:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:43.617 02:22:43 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:43.617 02:22:43 -- common/autotest_common.sh@1320 -- # shift 00:20:43.617 02:22:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:43.617 02:22:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.617 02:22:43 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:43.617 02:22:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:43.617 02:22:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:43.617 02:22:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:43.617 02:22:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:43.617 02:22:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.617 02:22:43 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:43.617 02:22:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:43.617 02:22:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:43.617 02:22:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:43.617 02:22:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:43.617 02:22:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:43.617 02:22:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:43.874 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:43.874 fio-3.35 00:20:43.874 Starting 1 thread 00:20:46.399 00:20:46.399 test: (groupid=0, jobs=1): err= 0: pid=93867: Mon Jul 15 02:22:45 2024 00:20:46.399 read: IOPS=9955, BW=38.9MiB/s (40.8MB/s)(78.0MiB/2006msec) 00:20:46.399 slat (nsec): min=1867, max=339866, avg=2493.03, stdev=3433.56 00:20:46.399 clat (usec): min=3223, max=12195, avg=6812.55, stdev=595.08 00:20:46.399 lat (usec): min=3258, max=12212, avg=6815.04, stdev=595.03 00:20:46.399 clat percentiles (usec): 00:20:46.399 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:20:46.399 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6915], 00:20:46.399 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7504], 95.00th=[ 7767], 00:20:46.399 | 99.00th=[ 8225], 99.50th=[ 8848], 99.90th=[10814], 99.95th=[11731], 00:20:46.399 | 99.99th=[12125] 00:20:46.399 bw ( KiB/s): min=39392, max=40304, per=99.96%, avg=39804.00, stdev=375.83, samples=4 00:20:46.399 iops : min= 9848, max=10076, avg=9951.00, stdev=93.96, samples=4 00:20:46.399 write: IOPS=9972, BW=39.0MiB/s (40.8MB/s)(78.1MiB/2006msec); 0 zone resets 00:20:46.399 slat (nsec): 
min=1911, max=247353, avg=2554.02, stdev=2320.84 00:20:46.399 clat (usec): min=2465, max=12718, avg=5990.90, stdev=527.64 00:20:46.399 lat (usec): min=2479, max=12720, avg=5993.45, stdev=527.62 00:20:46.399 clat percentiles (usec): 00:20:46.399 | 1.00th=[ 4752], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:20:46.399 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6128], 00:20:46.399 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6587], 95.00th=[ 6718], 00:20:46.399 | 99.00th=[ 7177], 99.50th=[ 7701], 99.90th=[10290], 99.95th=[11731], 00:20:46.399 | 99.99th=[12125] 00:20:46.399 bw ( KiB/s): min=39680, max=40248, per=100.00%, avg=39890.00, stdev=268.32, samples=4 00:20:46.399 iops : min= 9920, max=10062, avg=9972.50, stdev=67.08, samples=4 00:20:46.399 lat (msec) : 4=0.12%, 10=99.75%, 20=0.14% 00:20:46.399 cpu : usr=65.34%, sys=25.19%, ctx=12, majf=0, minf=5 00:20:46.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:46.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:46.399 issued rwts: total=19970,20004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:46.399 00:20:46.399 Run status group 0 (all jobs): 00:20:46.399 READ: bw=38.9MiB/s (40.8MB/s), 38.9MiB/s-38.9MiB/s (40.8MB/s-40.8MB/s), io=78.0MiB (81.8MB), run=2006-2006msec 00:20:46.399 WRITE: bw=39.0MiB/s (40.8MB/s), 39.0MiB/s-39.0MiB/s (40.8MB/s-40.8MB/s), io=78.1MiB (81.9MB), run=2006-2006msec 00:20:46.399 02:22:45 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:46.399 02:22:45 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:46.399 02:22:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:46.399 02:22:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:46.399 02:22:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:46.399 02:22:45 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:46.399 02:22:45 -- common/autotest_common.sh@1320 -- # shift 00:20:46.399 02:22:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:46.399 02:22:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.399 02:22:45 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:46.399 02:22:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:46.399 02:22:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:46.399 02:22:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:46.399 02:22:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:46.399 02:22:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.399 02:22:45 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:46.399 02:22:45 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:46.399 02:22:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:46.399 02:22:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:46.400 02:22:45 
-- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:46.400 02:22:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:46.400 02:22:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:46.400 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:46.400 fio-3.35 00:20:46.400 Starting 1 thread 00:20:48.995 00:20:48.995 test: (groupid=0, jobs=1): err= 0: pid=93916: Mon Jul 15 02:22:47 2024 00:20:48.995 read: IOPS=8787, BW=137MiB/s (144MB/s)(275MiB/2006msec) 00:20:48.995 slat (usec): min=2, max=126, avg= 3.64, stdev= 2.47 00:20:48.995 clat (usec): min=1485, max=17375, avg=8580.90, stdev=2190.26 00:20:48.995 lat (usec): min=1488, max=17378, avg=8584.54, stdev=2190.34 00:20:48.995 clat percentiles (usec): 00:20:48.995 | 1.00th=[ 4490], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6587], 00:20:48.995 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9110], 00:20:48.995 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11207], 95.00th=[11994], 00:20:48.995 | 99.00th=[14615], 99.50th=[15008], 99.90th=[16712], 99.95th=[16909], 00:20:48.995 | 99.99th=[17433] 00:20:48.995 bw ( KiB/s): min=60416, max=83232, per=51.74%, avg=72744.00, stdev=10620.48, samples=4 00:20:48.995 iops : min= 3776, max= 5202, avg=4546.50, stdev=663.78, samples=4 00:20:48.995 write: IOPS=5357, BW=83.7MiB/s (87.8MB/s)(148MiB/1767msec); 0 zone resets 00:20:48.995 slat (usec): min=31, max=362, avg=36.14, stdev= 8.91 00:20:48.995 clat (usec): min=2203, max=17556, avg=10320.69, stdev=1809.07 00:20:48.995 lat (usec): min=2235, max=17590, avg=10356.83, stdev=1809.23 00:20:48.995 clat percentiles (usec): 00:20:48.995 | 1.00th=[ 6915], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[ 8848], 00:20:48.995 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552], 00:20:48.995 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12780], 95.00th=[13566], 00:20:48.995 | 99.00th=[15270], 99.50th=[15795], 99.90th=[17171], 99.95th=[17433], 00:20:48.995 | 99.99th=[17433] 00:20:48.995 bw ( KiB/s): min=64128, max=86528, per=88.35%, avg=75736.00, stdev=10498.00, samples=4 00:20:48.995 iops : min= 4008, max= 5408, avg=4733.50, stdev=656.12, samples=4 00:20:48.995 lat (msec) : 2=0.03%, 4=0.32%, 10=62.56%, 20=37.09% 00:20:48.995 cpu : usr=70.82%, sys=19.30%, ctx=5, majf=0, minf=1 00:20:48.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:20:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:48.995 issued rwts: total=17628,9467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:48.995 00:20:48.995 Run status group 0 (all jobs): 00:20:48.995 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=275MiB (289MB), run=2006-2006msec 00:20:48.995 WRITE: bw=83.7MiB/s (87.8MB/s), 83.7MiB/s-83.7MiB/s (87.8MB/s-87.8MB/s), io=148MiB (155MB), run=1767-1767msec 00:20:48.995 02:22:47 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.995 02:22:48 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:20:48.995 02:22:48 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:20:48.995 02:22:48 -- host/fio.sh@51 -- # 
get_nvme_bdfs 00:20:48.995 02:22:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:48.996 02:22:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:48.996 02:22:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:48.996 02:22:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:48.996 02:22:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:48.996 02:22:48 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:20:48.996 02:22:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:48.996 02:22:48 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:20:48.996 Nvme0n1 00:20:48.996 02:22:48 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:49.560 02:22:48 -- host/fio.sh@53 -- # ls_guid=b23aa116-4535-480d-93d2-9ee3ed0ee347 00:20:49.560 02:22:48 -- host/fio.sh@54 -- # get_lvs_free_mb b23aa116-4535-480d-93d2-9ee3ed0ee347 00:20:49.560 02:22:48 -- common/autotest_common.sh@1343 -- # local lvs_uuid=b23aa116-4535-480d-93d2-9ee3ed0ee347 00:20:49.560 02:22:48 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:49.560 02:22:48 -- common/autotest_common.sh@1345 -- # local fc 00:20:49.560 02:22:48 -- common/autotest_common.sh@1346 -- # local cs 00:20:49.560 02:22:48 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:49.560 02:22:49 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:49.560 { 00:20:49.560 "base_bdev": "Nvme0n1", 00:20:49.560 "block_size": 4096, 00:20:49.560 "cluster_size": 1073741824, 00:20:49.560 "free_clusters": 4, 00:20:49.560 "name": "lvs_0", 00:20:49.560 "total_data_clusters": 4, 00:20:49.560 "uuid": "b23aa116-4535-480d-93d2-9ee3ed0ee347" 00:20:49.560 } 00:20:49.560 ]' 00:20:49.560 02:22:49 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="b23aa116-4535-480d-93d2-9ee3ed0ee347") .free_clusters' 00:20:49.560 02:22:49 -- common/autotest_common.sh@1348 -- # fc=4 00:20:49.560 02:22:49 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="b23aa116-4535-480d-93d2-9ee3ed0ee347") .cluster_size' 00:20:49.560 02:22:49 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:20:49.560 02:22:49 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:20:49.560 4096 00:20:49.560 02:22:49 -- common/autotest_common.sh@1353 -- # echo 4096 00:20:49.560 02:22:49 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:49.817 b34edbc2-636c-4c5f-a262-491ee4864370 00:20:49.817 02:22:49 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:50.075 02:22:49 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:50.332 02:22:49 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:50.588 02:22:49 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:50.588 02:22:49 -- common/autotest_common.sh@1339 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:50.588 02:22:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:50.588 02:22:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:50.588 02:22:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:50.588 02:22:49 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:50.588 02:22:49 -- common/autotest_common.sh@1320 -- # shift 00:20:50.588 02:22:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:50.588 02:22:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:50.588 02:22:49 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:50.588 02:22:49 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:50.588 02:22:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:50.588 02:22:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:50.588 02:22:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:50.588 02:22:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:50.588 02:22:50 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:50.588 02:22:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:50.588 02:22:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:50.588 02:22:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:50.588 02:22:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:50.588 02:22:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:50.588 02:22:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:50.588 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:50.588 fio-3.35 00:20:50.588 Starting 1 thread 00:20:53.114 00:20:53.114 test: (groupid=0, jobs=1): err= 0: pid=94066: Mon Jul 15 02:22:52 2024 00:20:53.114 read: IOPS=6745, BW=26.3MiB/s (27.6MB/s)(52.9MiB/2008msec) 00:20:53.114 slat (nsec): min=1918, max=342997, avg=2690.78, stdev=3957.34 00:20:53.114 clat (usec): min=4045, max=18069, avg=10105.24, stdev=940.72 00:20:53.114 lat (usec): min=4054, max=18071, avg=10107.93, stdev=940.55 00:20:53.114 clat percentiles (usec): 00:20:53.114 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:20:53.114 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:20:53.114 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:20:53.114 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14484], 99.95th=[16057], 00:20:53.114 | 99.99th=[17957] 00:20:53.114 bw ( KiB/s): min=25960, max=27376, per=99.91%, avg=26956.00, stdev=675.56, samples=4 00:20:53.114 iops : min= 6490, max= 6844, avg=6739.00, stdev=168.89, samples=4 00:20:53.114 write: IOPS=6744, BW=26.3MiB/s (27.6MB/s)(52.9MiB/2008msec); 0 zone resets 00:20:53.114 slat (usec): min=2, max=257, avg= 2.78, stdev= 2.81 00:20:53.114 clat (usec): min=2403, max=17002, avg=8815.15, stdev=834.74 00:20:53.114 lat (usec): min=2417, max=17005, avg=8817.92, stdev=834.66 00:20:53.114 clat 
percentiles (usec): 00:20:53.114 | 1.00th=[ 6980], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8160], 00:20:53.114 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:20:53.114 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:20:53.114 | 99.00th=[10683], 99.50th=[10945], 99.90th=[15008], 99.95th=[16188], 00:20:53.114 | 99.99th=[16909] 00:20:53.114 bw ( KiB/s): min=26880, max=27072, per=99.95%, avg=26962.00, stdev=93.84, samples=4 00:20:53.114 iops : min= 6720, max= 6768, avg=6740.50, stdev=23.46, samples=4 00:20:53.114 lat (msec) : 4=0.03%, 10=70.41%, 20=29.55% 00:20:53.114 cpu : usr=71.20%, sys=21.28%, ctx=6, majf=0, minf=5 00:20:53.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:53.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:53.114 issued rwts: total=13544,13542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:53.114 00:20:53.114 Run status group 0 (all jobs): 00:20:53.114 READ: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.9MiB (55.5MB), run=2008-2008msec 00:20:53.114 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.9MiB (55.5MB), run=2008-2008msec 00:20:53.114 02:22:52 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:53.372 02:22:52 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:20:53.630 02:22:52 -- host/fio.sh@64 -- # ls_nested_guid=dac0fa80-7dcb-4b8b-8f82-175d44a50652 00:20:53.630 02:22:52 -- host/fio.sh@65 -- # get_lvs_free_mb dac0fa80-7dcb-4b8b-8f82-175d44a50652 00:20:53.630 02:22:52 -- common/autotest_common.sh@1343 -- # local lvs_uuid=dac0fa80-7dcb-4b8b-8f82-175d44a50652 00:20:53.630 02:22:52 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:53.630 02:22:52 -- common/autotest_common.sh@1345 -- # local fc 00:20:53.630 02:22:52 -- common/autotest_common.sh@1346 -- # local cs 00:20:53.630 02:22:52 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:53.887 02:22:53 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:53.887 { 00:20:53.887 "base_bdev": "Nvme0n1", 00:20:53.887 "block_size": 4096, 00:20:53.887 "cluster_size": 1073741824, 00:20:53.887 "free_clusters": 0, 00:20:53.887 "name": "lvs_0", 00:20:53.887 "total_data_clusters": 4, 00:20:53.887 "uuid": "b23aa116-4535-480d-93d2-9ee3ed0ee347" 00:20:53.887 }, 00:20:53.887 { 00:20:53.887 "base_bdev": "b34edbc2-636c-4c5f-a262-491ee4864370", 00:20:53.887 "block_size": 4096, 00:20:53.887 "cluster_size": 4194304, 00:20:53.887 "free_clusters": 1022, 00:20:53.887 "name": "lvs_n_0", 00:20:53.887 "total_data_clusters": 1022, 00:20:53.887 "uuid": "dac0fa80-7dcb-4b8b-8f82-175d44a50652" 00:20:53.887 } 00:20:53.887 ]' 00:20:53.887 02:22:53 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="dac0fa80-7dcb-4b8b-8f82-175d44a50652") .free_clusters' 00:20:53.887 02:22:53 -- common/autotest_common.sh@1348 -- # fc=1022 00:20:53.887 02:22:53 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="dac0fa80-7dcb-4b8b-8f82-175d44a50652") .cluster_size' 00:20:53.887 4088 00:20:53.887 02:22:53 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:53.887 02:22:53 -- 
common/autotest_common.sh@1352 -- # free_mb=4088 00:20:53.887 02:22:53 -- common/autotest_common.sh@1353 -- # echo 4088 00:20:53.887 02:22:53 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:20:54.144 4427ed80-ea01-4180-811e-d39e61519959 00:20:54.144 02:22:53 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:20:54.401 02:22:53 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:20:54.657 02:22:54 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:54.914 02:22:54 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:54.914 02:22:54 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:54.914 02:22:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:54.914 02:22:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.914 02:22:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:54.914 02:22:54 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.914 02:22:54 -- common/autotest_common.sh@1320 -- # shift 00:20:54.914 02:22:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:54.914 02:22:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.914 02:22:54 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.914 02:22:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:54.914 02:22:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:54.914 02:22:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:54.914 02:22:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:54.914 02:22:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.914 02:22:54 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.914 02:22:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:54.914 02:22:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:54.914 02:22:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:54.914 02:22:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:54.914 02:22:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:54.914 02:22:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:54.914 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:54.914 fio-3.35 00:20:54.914 Starting 1 thread 00:20:57.438 00:20:57.438 test: (groupid=0, jobs=1): err= 0: pid=94183: Mon Jul 15 02:22:56 2024 00:20:57.438 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(47.8MiB/2009msec) 00:20:57.438 slat (nsec): min=1930, max=369682, avg=2742.00, 
stdev=4413.40 00:20:57.438 clat (usec): min=4616, max=19063, avg=11216.07, stdev=1065.15 00:20:57.438 lat (usec): min=4625, max=19066, avg=11218.81, stdev=1064.93 00:20:57.438 clat percentiles (usec): 00:20:57.438 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:20:57.438 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:20:57.438 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:20:57.438 | 99.00th=[13698], 99.50th=[14091], 99.90th=[16712], 99.95th=[17957], 00:20:57.438 | 99.99th=[19006] 00:20:57.438 bw ( KiB/s): min=23280, max=24904, per=99.89%, avg=24354.00, stdev=729.49, samples=4 00:20:57.438 iops : min= 5820, max= 6226, avg=6088.50, stdev=182.37, samples=4 00:20:57.438 write: IOPS=6073, BW=23.7MiB/s (24.9MB/s)(47.7MiB/2009msec); 0 zone resets 00:20:57.438 slat (nsec): min=1999, max=235792, avg=2866.68, stdev=2829.43 00:20:57.438 clat (usec): min=2388, max=18768, avg=9727.30, stdev=920.38 00:20:57.438 lat (usec): min=2400, max=18771, avg=9730.16, stdev=920.22 00:20:57.438 clat percentiles (usec): 00:20:57.438 | 1.00th=[ 7701], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:20:57.438 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:20:57.439 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:20:57.439 | 99.00th=[11731], 99.50th=[11994], 99.90th=[15401], 99.95th=[17695], 00:20:57.439 | 99.99th=[18744] 00:20:57.439 bw ( KiB/s): min=24152, max=24384, per=99.96%, avg=24284.00, stdev=114.73, samples=4 00:20:57.439 iops : min= 6038, max= 6096, avg=6071.00, stdev=28.68, samples=4 00:20:57.439 lat (msec) : 4=0.04%, 10=36.70%, 20=63.25% 00:20:57.439 cpu : usr=71.56%, sys=22.01%, ctx=8, majf=0, minf=5 00:20:57.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:20:57.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.439 issued rwts: total=12245,12201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.439 00:20:57.439 Run status group 0 (all jobs): 00:20:57.439 READ: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=47.8MiB (50.2MB), run=2009-2009msec 00:20:57.439 WRITE: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=47.7MiB (50.0MB), run=2009-2009msec 00:20:57.439 02:22:56 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:57.697 02:22:57 -- host/fio.sh@74 -- # sync 00:20:57.697 02:22:57 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:20:57.955 02:22:57 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:58.214 02:22:57 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:20:58.214 02:22:57 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:58.482 02:22:57 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:20:59.471 02:22:58 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:59.471 02:22:58 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:59.471 02:22:58 -- host/fio.sh@86 -- # nvmftestfini 00:20:59.471 02:22:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:59.471 02:22:58 -- 
nvmf/common.sh@116 -- # sync 00:20:59.471 02:22:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:59.471 02:22:58 -- nvmf/common.sh@119 -- # set +e 00:20:59.471 02:22:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:59.471 02:22:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:59.471 rmmod nvme_tcp 00:20:59.471 rmmod nvme_fabrics 00:20:59.471 rmmod nvme_keyring 00:20:59.471 02:22:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:59.471 02:22:58 -- nvmf/common.sh@123 -- # set -e 00:20:59.471 02:22:58 -- nvmf/common.sh@124 -- # return 0 00:20:59.471 02:22:58 -- nvmf/common.sh@477 -- # '[' -n 93736 ']' 00:20:59.471 02:22:58 -- nvmf/common.sh@478 -- # killprocess 93736 00:20:59.471 02:22:58 -- common/autotest_common.sh@926 -- # '[' -z 93736 ']' 00:20:59.471 02:22:58 -- common/autotest_common.sh@930 -- # kill -0 93736 00:20:59.471 02:22:58 -- common/autotest_common.sh@931 -- # uname 00:20:59.471 02:22:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.471 02:22:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93736 00:20:59.471 killing process with pid 93736 00:20:59.471 02:22:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:59.471 02:22:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:59.471 02:22:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93736' 00:20:59.471 02:22:58 -- common/autotest_common.sh@945 -- # kill 93736 00:20:59.471 02:22:58 -- common/autotest_common.sh@950 -- # wait 93736 00:20:59.730 02:22:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:59.730 02:22:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:59.730 02:22:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:59.730 02:22:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.730 02:22:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:59.730 02:22:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.730 02:22:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.730 02:22:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.730 02:22:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:59.730 00:20:59.730 real 0m19.076s 00:20:59.730 user 1m23.556s 00:20:59.730 sys 0m4.462s 00:20:59.730 02:22:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.730 02:22:59 -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 ************************************ 00:20:59.730 END TEST nvmf_fio_host 00:20:59.730 ************************************ 00:20:59.730 02:22:59 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:59.730 02:22:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:59.730 02:22:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:59.730 02:22:59 -- common/autotest_common.sh@10 -- # set +x 00:20:59.730 ************************************ 00:20:59.730 START TEST nvmf_failover 00:20:59.730 ************************************ 00:20:59.730 02:22:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:59.989 * Looking for test storage... 
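Before the failover test proper starts, the nvmftestinit step below wires up a virtual test network: one veth pair for the initiator, one veth pair whose far end lives in the nvmf_tgt_ns_spdk namespace for the target, and an nvmf_br bridge joining the bridge-side peers. A minimal standalone sketch of that wiring, using the interface names and addresses from this run (condensed to a single target interface; the real helper also creates nvmf_tgt_if2/10.0.0.3 and adds iptables ACCEPT rules):

    # create the target's network namespace
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: initiator side, and target side whose end moves into the namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addresses: 10.0.0.1 = initiator, 10.0.0.2 = first target IP
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bring everything up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the two bridge-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # reachability check, as the harness does
    ping -c 1 10.0.0.2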
00:20:59.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:20:59.989 02:22:59 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:20:59.989 02:22:59 -- nvmf/common.sh@7 -- # uname -s
00:20:59.989 02:22:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:59.989 02:22:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:59.989 02:22:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:59.989 02:22:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:59.989 02:22:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:59.989 02:22:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:59.989 02:22:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:59.989 02:22:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:59.989 02:22:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:59.989 02:22:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:59.989 02:22:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1
00:20:59.989 02:22:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1
00:20:59.989 02:22:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:59.989 02:22:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:59.989 02:22:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:20:59.989 02:22:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:20:59.989 02:22:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:59.989 02:22:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:59.989 02:22:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:59.989 02:22:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[… repeated toolchain segments …]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:59.989 02:22:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[… repeated toolchain segments …]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:59.989 02:22:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[… repeated toolchain segments …]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:59.989 02:22:59 -- paths/export.sh@5 -- # export PATH
00:20:59.989 02:22:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[… repeated toolchain segments …]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:59.989 02:22:59 -- nvmf/common.sh@46 -- # : 0
00:20:59.989 02:22:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:20:59.989 02:22:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:20:59.989 02:22:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:20:59.989 02:22:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:59.989 02:22:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:59.989 02:22:59 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:20:59.989 02:22:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:20:59.989 02:22:59 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:20:59.989 02:22:59 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:59.989 02:22:59 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:59.989 02:22:59 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:59.989 02:22:59 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:59.989 02:22:59 -- host/failover.sh@18 -- # nvmftestinit
00:20:59.989 02:22:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:20:59.989 02:22:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:59.989 02:22:59 -- nvmf/common.sh@436 -- # prepare_net_devs
00:20:59.989 02:22:59 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:20:59.989 02:22:59 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:20:59.989 02:22:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:59.989 02:22:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:59.989 02:22:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:59.989 02:22:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:20:59.989 02:22:59 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:20:59.989 02:22:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:20:59.989 02:22:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:20:59.989 02:22:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:20:59.989 02:22:59 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:20:59.989 02:22:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:59.989 02:22:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:59.989 02:22:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:20:59.989 02:22:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:20:59.989 02:22:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:20:59.989 02:22:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:20:59.989 02:22:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:20:59.989 02:22:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:59.989 02:22:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:20:59.989 02:22:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:20:59.989 02:22:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:20:59.989 02:22:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:59.989 02:22:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:59.989 02:22:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:59.989 Cannot find device "nvmf_tgt_br" 00:20:59.989 02:22:59 -- nvmf/common.sh@154 -- # true 00:20:59.989 02:22:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:59.989 Cannot find device "nvmf_tgt_br2" 00:20:59.989 02:22:59 -- nvmf/common.sh@155 -- # true 00:20:59.989 02:22:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:59.989 02:22:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:59.989 Cannot find device "nvmf_tgt_br" 00:20:59.989 02:22:59 -- nvmf/common.sh@157 -- # true 00:20:59.989 02:22:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:59.989 Cannot find device "nvmf_tgt_br2" 00:20:59.989 02:22:59 -- nvmf/common.sh@158 -- # true 00:20:59.989 02:22:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:59.989 02:22:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:59.989 02:22:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:59.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.989 02:22:59 -- nvmf/common.sh@161 -- # true 00:20:59.989 02:22:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:59.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:59.989 02:22:59 -- nvmf/common.sh@162 -- # true 00:20:59.989 02:22:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:59.989 02:22:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:59.989 02:22:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:59.989 02:22:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:59.989 02:22:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:59.989 02:22:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:00.248 02:22:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:00.248 02:22:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:00.248 02:22:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:00.248 02:22:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:00.248 02:22:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:00.248 02:22:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:00.248 02:22:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:00.248 02:22:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:00.248 02:22:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:00.248 02:22:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:00.248 02:22:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:00.248 02:22:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:00.248 02:22:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:00.248 02:22:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:00.248 02:22:59 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:21:00.248 02:22:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:00.249 02:22:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:00.249 02:22:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:00.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:21:00.249 00:21:00.249 --- 10.0.0.2 ping statistics --- 00:21:00.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.249 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:00.249 02:22:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:00.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:00.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:21:00.249 00:21:00.249 --- 10.0.0.3 ping statistics --- 00:21:00.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.249 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:21:00.249 02:22:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:00.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:00.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:00.249 00:21:00.249 --- 10.0.0.1 ping statistics --- 00:21:00.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.249 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:00.249 02:22:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.249 02:22:59 -- nvmf/common.sh@421 -- # return 0 00:21:00.249 02:22:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:00.249 02:22:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.249 02:22:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:00.249 02:22:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:00.249 02:22:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.249 02:22:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:00.249 02:22:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:00.249 02:22:59 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:00.249 02:22:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:00.249 02:22:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:00.249 02:22:59 -- common/autotest_common.sh@10 -- # set +x 00:21:00.249 02:22:59 -- nvmf/common.sh@469 -- # nvmfpid=94461 00:21:00.249 02:22:59 -- nvmf/common.sh@470 -- # waitforlisten 94461 00:21:00.249 02:22:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:00.249 02:22:59 -- common/autotest_common.sh@819 -- # '[' -z 94461 ']' 00:21:00.249 02:22:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.249 02:22:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:00.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.249 02:22:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.249 02:22:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:00.249 02:22:59 -- common/autotest_common.sh@10 -- # set +x 00:21:00.249 [2024-07-15 02:22:59.744157] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:21:00.249 [2024-07-15 02:22:59.744284] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.507 [2024-07-15 02:22:59.884895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:00.507 [2024-07-15 02:22:59.971803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:00.507 [2024-07-15 02:22:59.971956] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.507 [2024-07-15 02:22:59.971968] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.507 [2024-07-15 02:22:59.971976] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.507 [2024-07-15 02:22:59.972109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.507 [2024-07-15 02:22:59.972991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.507 [2024-07-15 02:22:59.973049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.441 02:23:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:01.441 02:23:00 -- common/autotest_common.sh@852 -- # return 0 00:21:01.441 02:23:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:01.441 02:23:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:01.441 02:23:00 -- common/autotest_common.sh@10 -- # set +x 00:21:01.441 02:23:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.441 02:23:00 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:01.441 [2024-07-15 02:23:00.919766] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.441 02:23:00 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:01.699 Malloc0 00:21:01.699 02:23:01 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:01.958 02:23:01 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:02.217 02:23:01 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.475 [2024-07-15 02:23:01.937501] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.475 02:23:01 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:02.732 [2024-07-15 02:23:02.189725] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:02.732 02:23:02 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:02.990 [2024-07-15 02:23:02.401910] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:02.990 02:23:02 -- host/failover.sh@31 -- # bdevperf_pid=94574 00:21:02.990 02:23:02 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z 
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:21:02.990 02:23:02 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:02.990 02:23:02 -- host/failover.sh@34 -- # waitforlisten 94574 /var/tmp/bdevperf.sock
00:21:02.990 02:23:02 -- common/autotest_common.sh@819 -- # '[' -z 94574 ']'
00:21:02.990 02:23:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:02.990 02:23:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:21:02.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:02.990 02:23:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:02.990 02:23:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:21:02.990 02:23:02 -- common/autotest_common.sh@10 -- # set +x
00:21:03.922 02:23:03 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:03.922 02:23:03 -- common/autotest_common.sh@852 -- # return 0
00:21:03.922 02:23:03 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:04.179 NVMe0n1
00:21:04.435 02:23:03 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:04.692
00:21:04.692 02:23:04 -- host/failover.sh@39 -- # run_test_pid=94616
00:21:04.692 02:23:04 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:04.692 02:23:04 -- host/failover.sh@41 -- # sleep 1
00:21:05.624 02:23:05 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:05.882 [2024-07-15 02:23:05.277121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762850 is same with the state(5) to be set
00:21:05.882 (last message repeated for each subsequent recv-state transition on tqpair=0x762850 while the qpair shut down)
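The recv-state flood above is the target tearing down the active qpair once the listener on port 4420 disappeared; bdevperf keeps running because NVMe0 was attached through more than one portal. Stripped of the harness plumbing, the failover exercise reduces to this rpc.py sequence (socket path and NQN exactly as in this run; a sketch of the shape of the test, not the full failover.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # two portals under the same controller name give bdev_nvme a failover path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn

    # drop the active listener on the target; in-flight I/O is aborted
    # (SQ DELETION) and the initiator reconnects via the surviving portal
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420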
00:21:05.883 02:23:05 -- host/failover.sh@45 -- # sleep 3
00:21:09.159 02:23:08 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:09.159
00:21:09.159 02:23:08 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:09.417 [2024-07-15 02:23:08.896824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x763f90 is same with the state(5) to be set
00:21:09.418 (last message repeated for each subsequent recv-state transition on tqpair=0x763f90)
00:21:09.418 02:23:08 -- host/failover.sh@50 -- # sleep 3
00:21:12.700 02:23:11 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:12.700 [2024-07-15 02:23:12.176298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
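With port 4420 re-added, cnode1 is briefly reachable on both 4420 and the still-present 4422 until the final removal below. When triaging a run like this, the subsystem's current listener set can be dumped from the target's RPC socket, assuming the nvmf_subsystem_get_listeners RPC is available in this SPDK tree:

    # prints the tcp/IPv4 portals currently exposed by cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1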
00:21:12.700 02:23:12 -- host/failover.sh@55 -- # sleep 1
00:21:14.074 02:23:13 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:14.074 [2024-07-15 02:23:13.420809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x765010 is same with the state(5) to be set
00:21:14.074 (last message repeated for each subsequent recv-state transition on tqpair=0x765010)
00:21:14.075 02:23:13 -- host/failover.sh@59 -- # wait 94616
00:21:20.640 0
00:21:20.640 02:23:19 -- host/failover.sh@61 -- # killprocess 94574
00:21:20.640 02:23:19 -- common/autotest_common.sh@926 -- # '[' -z 94574 ']'
00:21:20.640 02:23:19 -- common/autotest_common.sh@930 -- # kill -0 94574
00:21:20.640 02:23:19 -- common/autotest_common.sh@931 -- # uname
00:21:20.640 02:23:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:21:20.640 02:23:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94574
00:21:20.640 02:23:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:21:20.640 02:23:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:21:20.640 02:23:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94574'
killing process with pid 94574
00:21:20.640 02:23:19 -- common/autotest_common.sh@945 -- # kill 94574
00:21:20.640 02:23:19 -- common/autotest_common.sh@950 -- # wait 94574
00:21:20.641 02:23:19 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:20.641 [2024-07-15 02:23:02.462076] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:21:20.641 [2024-07-15 02:23:02.462749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94574 ]
00:21:20.641 [2024-07-15 02:23:02.600183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:20.641 [2024-07-15 02:23:02.692760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:20.641 Running I/O for 15 seconds...
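The dump that follows is bdevperf's own try.txt log. Every READ that was in flight when a listener was removed completes with ABORTED - SQ DELETION; that is the expected signature of the forced failovers above rather than a data-integrity failure, which is consistent with the wait on bdevperf returning 0. A quick triage sketch for such a dump, assuming GNU grep and the try.txt path from this run:

    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    # how many commands were aborted by qpair teardown
    grep -c 'ABORTED - SQ DELETION' "$log"
    # which LBAs the aborted READs targeted
    grep -o 'lba:[0-9]*' "$log" | sort -u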
00:21:20.641 [2024-07-15 02:23:05.282697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 02:23:05.282749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated for each outstanding READ/WRITE on sqid:1 (lba 122152 through 123544, len:8), every command completing ABORTED - SQ DELETION (00/08), through 02:23:05.286931 ...]
[2024-07-15 02:23:05.286948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17810 is same with the state(5) to be set
[2024-07-15 02:23:05.286965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-07-15 02:23:05.286992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-07-15 02:23:05.287003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:8 PRP1 0x0 PRP2 0x0
[2024-07-15 02:23:05.287017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 02:23:05.287077] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f17810 was disconnected and freed. reset controller.
00:21:20.644 [2024-07-15 02:23:05.287095] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-07-15 02:23:05.287149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 02:23:05.287171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 02:23:05.287186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 02:23:05.287200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 02:23:05.287214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 02:23:05.287228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 02:23:05.287242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 02:23:05.287256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 02:23:05.287270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-15 02:23:05.289829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-15 02:23:05.289868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eedea0 (9): Bad file descriptor
[2024-07-15 02:23:05.325117] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:20.644 [2024-07-15 02:23:08.897187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 02:23:08.897236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated for each outstanding READ/WRITE on sqid:1 (lba 7808 through 8728, len:8), every command completing ABORTED - SQ DELETION (00/08), through 02:23:08.898892 ...]
[2024-07-15 02:23:08.898909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 02:23:08.898924] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.898940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.645 [2024-07-15 02:23:08.898955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.898977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.645 [2024-07-15 02:23:08.899007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.645 [2024-07-15 02:23:08.899037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.645 [2024-07-15 02:23:08.899067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.645 [2024-07-15 02:23:08.899097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.645 [2024-07-15 02:23:08.899126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.645 [2024-07-15 02:23:08.899156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.645 [2024-07-15 02:23:08.899185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.645 [2024-07-15 02:23:08.899226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.645 [2024-07-15 02:23:08.899257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.645 [2024-07-15 02:23:08.899287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.645 [2024-07-15 02:23:08.899317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.645 [2024-07-15 02:23:08.899333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.645 [2024-07-15 02:23:08.899347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.899415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.899474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.899533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.899570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:20.646 [2024-07-15 02:23:08.899586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.899721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.899954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.899971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:8992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.646 [2024-07-15 02:23:08.900746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.646 [2024-07-15 02:23:08.900763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.646 [2024-07-15 02:23:08.900777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.900793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.647 [2024-07-15 02:23:08.900808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.900824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.647 [2024-07-15 02:23:08.900839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.900855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.647 [2024-07-15 02:23:08.900870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.900886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.900901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.900917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.647 [2024-07-15 
02:23:08.900932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.900948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.647 [2024-07-15 02:23:08.900963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.900979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.900993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.647 [2024-07-15 02:23:08.901037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.647 [2024-07-15 02:23:08.901073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.901102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.647 [2024-07-15 02:23:08.901131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.901166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.647 [2024-07-15 02:23:08.901195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.901224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.901255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.901283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.901313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.901342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.901371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.647 [2024-07-15 02:23:08.901405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1a00 is same with the state(5) to be set 00:21:20.647 [2024-07-15 02:23:08.901443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.647 [2024-07-15 02:23:08.901454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.647 [2024-07-15 02:23:08.901466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8592 len:8 PRP1 0x0 PRP2 0x0 00:21:20.647 [2024-07-15 02:23:08.901480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.647 [2024-07-15 02:23:08.901537] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c1a00 was disconnected and freed. reset controller. 
00:21:20.647 [2024-07-15 02:23:08.901554] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:20.647 [2024-07-15 02:23:08.901606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.647 [2024-07-15 02:23:08.901657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.647 [2024-07-15 02:23:08.901674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.647 [2024-07-15 02:23:08.901688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.647 [2024-07-15 02:23:08.901703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.647 [2024-07-15 02:23:08.901728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.647 [2024-07-15 02:23:08.901743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.647 [2024-07-15 02:23:08.901763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.647 [2024-07-15 02:23:08.901778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:20.647 [2024-07-15 02:23:08.904387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:20.647 [2024-07-15 02:23:08.904427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eedea0 (9): Bad file descriptor
00:21:20.647 [2024-07-15 02:23:08.934058] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:20.647 [2024-07-15 02:23:13.421466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.647 [2024-07-15 02:23:13.421511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.647 [2024-07-15 02:23:13.421532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.647 [2024-07-15 02:23:13.421547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.647 [2024-07-15 02:23:13.421563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.647 [2024-07-15 02:23:13.421577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.647 [2024-07-15 02:23:13.421592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:20.647 [2024-07-15 02:23:13.421607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:20.647 [2024-07-15 02:23:13.421621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eedea0 is same with the state(5) to be set
[... repeated entries elided (02:23:13.427946-02:23:13.430453): each remaining queued READ/WRITE I/O on sqid:1 (nsid:1, lba 115592-116712, len:8) printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:21:20.649 [2024-07-15 02:23:13.430471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.649 [2024-07-15 02:23:13.430486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.649 [2024-07-15 02:23:13.430517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.649 [2024-07-15 02:23:13.430548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.649 [2024-07-15 02:23:13.430578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.649 [2024-07-15 02:23:13.430622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.649 [2024-07-15 02:23:13.430655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.649 [2024-07-15 02:23:13.430687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.649 [2024-07-15 02:23:13.430717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.649 [2024-07-15 02:23:13.430748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.649 [2024-07-15 02:23:13.430779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430795] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.649 [2024-07-15 02:23:13.430810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.649 [2024-07-15 02:23:13.430827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.649 [2024-07-15 02:23:13.430841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.430857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.650 [2024-07-15 02:23:13.430879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.430897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.430912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.430929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.430944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.430961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.430976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.430993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.650 [2024-07-15 02:23:13.431385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.650 [2024-07-15 02:23:13.431416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116896 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:20.650 [2024-07-15 02:23:13.431446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.650 [2024-07-15 02:23:13.431478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.650 [2024-07-15 02:23:13.431509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.650 [2024-07-15 02:23:13.431539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.650 [2024-07-15 02:23:13.431680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 
[2024-07-15 02:23:13.431780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.650 [2024-07-15 02:23:13.431811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.431980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.431994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.432010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.432025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.432041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.432055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.432077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.650 [2024-07-15 02:23:13.432092] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.432108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c2200 is same with the state(5) to be set 00:21:20.650 [2024-07-15 02:23:13.432125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.650 [2024-07-15 02:23:13.432137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.650 [2024-07-15 02:23:13.432149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116656 len:8 PRP1 0x0 PRP2 0x0 00:21:20.650 [2024-07-15 02:23:13.432163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.650 [2024-07-15 02:23:13.432222] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c2200 was disconnected and freed. reset controller. 00:21:20.650 [2024-07-15 02:23:13.432240] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:20.650 [2024-07-15 02:23:13.432261] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:20.650 [2024-07-15 02:23:13.432307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eedea0 (9): Bad file descriptor 00:21:20.651 [2024-07-15 02:23:13.434714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:20.651 [2024-07-15 02:23:13.468361] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:20.651 00:21:20.651 Latency(us) 00:21:20.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.651 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:20.651 Verification LBA range: start 0x0 length 0x4000 00:21:20.651 NVMe0n1 : 15.01 13557.96 52.96 324.46 0.00 9202.70 577.16 20137.43 00:21:20.651 =================================================================================================================== 00:21:20.651 Total : 13557.96 52.96 324.46 0.00 9202.70 577.16 20137.43 00:21:20.651 Received shutdown signal, test time was about 15.000000 seconds 00:21:20.651 00:21:20.651 Latency(us) 00:21:20.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.651 =================================================================================================================== 00:21:20.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.651 02:23:19 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:20.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
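For reference, the assertion host/failover.sh applies at this point can be sketched as follows. This is a minimal reconstruction from the grep and count lines visible in the log, not the script verbatim; the try.txt path is the capture file the script cats later in this run:

    # Count how many controller resets completed during the 15s run; the test
    # drove three failover legs across ports 4420/4421/4422, so it expects
    # exactly three 'Resetting controller successful' notices in the capture.
    count=$(grep -c 'Resetting controller successful' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, got $count" >&2
        exit 1
    fi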
00:21:20.651 02:23:19 -- host/failover.sh@65 -- # count=3
00:21:20.651 02:23:19 -- host/failover.sh@67 -- # (( count != 3 ))
00:21:20.651 02:23:19 -- host/failover.sh@73 -- # bdevperf_pid=94825
00:21:20.651 02:23:19 -- host/failover.sh@75 -- # waitforlisten 94825 /var/tmp/bdevperf.sock
00:21:20.651 02:23:19 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:20.651 02:23:19 -- common/autotest_common.sh@819 -- # '[' -z 94825 ']'
00:21:20.651 02:23:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:20.651 02:23:19 -- common/autotest_common.sh@824 -- # local max_retries=100
00:21:20.651 02:23:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:20.651 02:23:19 -- common/autotest_common.sh@828 -- # xtrace_disable
00:21:20.651 02:23:19 -- common/autotest_common.sh@10 -- # set +x
00:21:20.909 02:23:20 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:20.909 02:23:20 -- common/autotest_common.sh@852 -- # return 0
00:21:20.909 02:23:20 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:21.168 [2024-07-15 02:23:20.675567] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:21.168 02:23:20 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:21.426 [2024-07-15 02:23:20.923798] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:21.426 02:23:20 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:21.687 NVMe0n1
00:21:21.687 02:23:21 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:22.252
00:21:22.252 02:23:21 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:22.511
00:21:22.511 02:23:21 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:22.511 02:23:21 -- host/failover.sh@82 -- # grep -q NVMe0
00:21:22.768 02:23:22 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:23.026 02:23:22 -- host/failover.sh@87 -- # sleep 3
00:21:26.305 02:23:25 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:26.305 02:23:25 -- host/failover.sh@88 -- # grep -q NVMe0
00:21:26.305 02:23:25 -- host/failover.sh@90 -- # run_test_pid=94963
00:21:26.305 02:23:25 -- host/failover.sh@92 -- # wait 94963
00:21:26.305 02:23:25 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:27.236 0
00:21:27.236 02:23:26 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:27.236 [2024-07-15 02:23:19.462115] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:21:27.236 [2024-07-15 02:23:19.462730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94825 ]
00:21:27.236 [2024-07-15 02:23:19.598610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:27.236 [2024-07-15 02:23:19.684436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:27.236 [2024-07-15 02:23:22.333183] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:27.236 [2024-07-15 02:23:22.333317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:27.236 [2024-07-15 02:23:22.333341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:27.236 [2024-07-15 02:23:22.333369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:27.236 [2024-07-15 02:23:22.333382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:27.236 [2024-07-15 02:23:22.333395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:27.236 [2024-07-15 02:23:22.333419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:27.236 [2024-07-15 02:23:22.333431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:27.236 [2024-07-15 02:23:22.333443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:27.236 [2024-07-15 02:23:22.333456] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:27.236 [2024-07-15 02:23:22.333499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:27.236 [2024-07-15 02:23:22.333528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175bea0 (9): Bad file descriptor
00:21:27.236 [2024-07-15 02:23:22.336805] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:27.236 Running I/O for 1 seconds...
00:21:27.236
00:21:27.236 Latency(us)
00:21:27.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:27.236 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:27.236 Verification LBA range: start 0x0 length 0x4000
00:21:27.236 NVMe0n1 : 1.01 13792.23 53.88 0.00 0.00 9238.80 912.29 10307.03
00:21:27.236 ===================================================================================================================
00:21:27.236 Total : 13792.23 53.88 0.00 0.00 9238.80 912.29 10307.03
00:21:27.236 02:23:26 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:27.236 02:23:26 -- host/failover.sh@95 -- # grep -q NVMe0
00:21:27.494 02:23:26 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:27.752 02:23:27 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:27.752 02:23:27 -- host/failover.sh@99 -- # grep -q NVMe0
00:21:28.009 02:23:27 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:28.266 02:23:27 -- host/failover.sh@101 -- # sleep 3
00:21:31.546 02:23:30 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:31.546 02:23:30 -- host/failover.sh@103 -- # grep -q NVMe0
00:21:31.546 02:23:30 -- host/failover.sh@108 -- # killprocess 94825
00:21:31.546 02:23:30 -- common/autotest_common.sh@926 -- # '[' -z 94825 ']'
00:21:31.546 02:23:30 -- common/autotest_common.sh@930 -- # kill -0 94825
00:21:31.546 02:23:30 -- common/autotest_common.sh@931 -- # uname
00:21:31.546 02:23:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:21:31.546 02:23:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94825
00:21:31.546 killing process with pid 94825
00:21:31.546 02:23:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:21:31.546 02:23:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:21:31.546 02:23:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94825'
00:21:31.546 02:23:30 -- common/autotest_common.sh@945 -- # kill 94825
00:21:31.546 02:23:30 -- common/autotest_common.sh@950 -- # wait 94825
00:21:31.804 02:23:31 -- host/failover.sh@110 -- # sync
00:21:31.804 02:23:31 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:32.062 02:23:31 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:21:32.062 02:23:31 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:32.062 02:23:31 -- host/failover.sh@116 -- # nvmftestfini
00:21:32.062 02:23:31 -- nvmf/common.sh@476 -- # nvmfcleanup
00:21:32.062 02:23:31 -- nvmf/common.sh@116 -- # sync
00:21:32.062 02:23:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:21:32.062 02:23:31 -- nvmf/common.sh@119 -- # set +e
00:21:32.062 02:23:31 -- nvmf/common.sh@120 -- # for i in {1..20}
00:21:32.062 02:23:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:21:32.062 rmmod nvme_tcp
00:21:32.062 rmmod nvme_fabrics
00:21:32.062 rmmod nvme_keyring
00:21:32.062 02:23:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:21:32.062 02:23:31 -- nvmf/common.sh@123 -- # set -e
00:21:32.062 02:23:31 -- nvmf/common.sh@124 -- # return 0
00:21:32.062 02:23:31 -- nvmf/common.sh@477 -- # '[' -n 94461 ']'
00:21:32.062 02:23:31 -- nvmf/common.sh@478 -- # killprocess 94461
00:21:32.062 02:23:31 -- common/autotest_common.sh@926 -- # '[' -z 94461 ']'
00:21:32.062 02:23:31 -- common/autotest_common.sh@930 -- # kill -0 94461
00:21:32.062 02:23:31 -- common/autotest_common.sh@931 -- # uname
00:21:32.062 02:23:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:21:32.062 02:23:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94461
00:21:32.062 killing process with pid 94461
00:21:32.062 02:23:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:21:32.062 02:23:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:21:32.062 02:23:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94461'
00:21:32.062 02:23:31 -- common/autotest_common.sh@945 -- # kill 94461
00:21:32.062 02:23:31 -- common/autotest_common.sh@950 -- # wait 94461
00:21:32.321 02:23:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:21:32.321 02:23:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:21:32.321 02:23:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:21:32.321 02:23:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:32.321 02:23:31 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:21:32.321 02:23:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:32.321 02:23:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:32.321 02:23:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:32.321 02:23:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:21:32.321
00:21:32.321 real 0m32.596s
00:21:32.321 user 2m6.618s
00:21:32.321 sys 0m4.785s
00:21:32.321 02:23:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:32.321 02:23:31 -- common/autotest_common.sh@10 -- # set +x
00:21:32.321 ************************************
00:21:32.321 END TEST nvmf_failover
00:21:32.321 ************************************
00:21:32.321 02:23:31 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:21:32.321 02:23:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:21:32.321 02:23:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:21:32.321 02:23:31 -- common/autotest_common.sh@10 -- # set +x
00:21:32.580 ************************************
00:21:32.580 START TEST nvmf_discovery
00:21:32.580 ************************************
00:21:32.580 02:23:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:21:32.580 * Looking for test storage...
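Before following the discovery test output, the multipath failover flow the log above just completed can be condensed into the following sketch. The RPC verbs, socket path, NQN and ports are taken from the log itself; the shell variables are illustrative only, and error handling is omitted:

    # Multipath failover exercise against a running bdevperf (-z) instance.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Publish two additional target listeners so the initiator has spare paths.
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

    # Attach the controller over all three portals; as the test uses it here,
    # repeating the call with the same bdev name supplies alternate paths
    # that bdev_nvme can fail over to.
    for port in 4420 4421 4422; do
        $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
    done

    # Drop paths one at a time; the NVMe0 controller must survive each removal.
    $RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
    $RPC -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0   # still present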
00:21:32.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:21:32.580 02:23:31 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:21:32.580 02:23:31 -- nvmf/common.sh@7 -- # uname -s
00:21:32.580 02:23:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:32.580 02:23:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:32.580 02:23:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:32.580 02:23:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:32.580 02:23:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:32.580 02:23:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:32.580 02:23:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:32.580 02:23:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:32.580 02:23:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:32.581 02:23:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:32.581 02:23:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1
00:21:32.581 02:23:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1
00:21:32.581 02:23:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:32.581 02:23:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:32.581 02:23:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:21:32.581 02:23:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:21:32.581 02:23:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:32.581 02:23:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:32.581 02:23:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:32.581 [log condensed] paths/export.sh@2-@4 each rebuild PATH by prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin (several times over, since the file is re-sourced per test), and paths/export.sh@5-@6 export and echo the resulting value; the repeated multi-hundred-character PATH strings are omitted here.
00:21:32.581 02:23:31 -- nvmf/common.sh@46 -- # : 0
00:21:32.581 02:23:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:21:32.581 02:23:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:21:32.581 02:23:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:21:32.581 02:23:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:32.581 02:23:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:32.581 02:23:31 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:21:32.581 02:23:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:21:32.581 02:23:31 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:21:32.581 02:23:31 -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:21:32.581 02:23:31 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:21:32.581 02:23:31 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:21:32.581 02:23:31 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:21:32.581 02:23:31 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:21:32.581 02:23:31 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:21:32.581 02:23:31 -- host/discovery.sh@25 -- # nvmftestinit
00:21:32.581 02:23:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:21:32.581 02:23:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:32.581 02:23:31 -- nvmf/common.sh@436 -- # prepare_net_devs
00:21:32.581 02:23:31 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:21:32.581 02:23:31 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:21:32.581 02:23:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:32.581 02:23:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:32.581 02:23:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:32.581 02:23:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:21:32.581 02:23:31 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:21:32.581 02:23:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:21:32.581 02:23:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:21:32.581 02:23:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:21:32.581 02:23:31 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:21:32.581 02:23:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:32.581 02:23:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:32.581 02:23:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:21:32.581 02:23:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:21:32.581 02:23:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:21:32.581 02:23:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:21:32.581 02:23:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:21:32.581 02:23:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:32.581 02:23:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:21:32.581 02:23:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:21:32.581 02:23:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:21:32.581 02:23:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:21:32.581 02:23:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:21:32.581 02:23:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:21:32.581 Cannot find device "nvmf_tgt_br"
00:21:32.581 02:23:32 -- nvmf/common.sh@154 -- # true
00:21:32.581 02:23:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:21:32.581 Cannot find device "nvmf_tgt_br2"
00:21:32.581 02:23:32 -- nvmf/common.sh@155 -- # true
00:21:32.581 02:23:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:21:32.581 02:23:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:21:32.581 Cannot find device "nvmf_tgt_br"
00:21:32.581 02:23:32 -- nvmf/common.sh@157 -- # true
00:21:32.581 02:23:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:21:32.581 Cannot find device "nvmf_tgt_br2"
00:21:32.581 02:23:32 -- nvmf/common.sh@158 -- # true
00:21:32.581 02:23:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:21:32.581 02:23:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:21:32.581 02:23:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:21:32.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:32.581 02:23:32 -- nvmf/common.sh@161 -- # true
00:21:32.581 02:23:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:21:32.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:21:32.581 02:23:32 -- nvmf/common.sh@162 -- # true
00:21:32.581 02:23:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:21:32.840 02:23:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:21:32.840 02:23:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:21:32.840 02:23:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:21:32.840 02:23:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:21:32.840 02:23:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:21:32.840 02:23:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:21:32.840 02:23:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:21:32.840 02:23:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:21:32.840 02:23:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:21:32.840 02:23:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:21:32.840 02:23:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:21:32.840 02:23:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:21:32.840 02:23:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:21:32.840 02:23:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:21:32.840 02:23:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:21:32.840 02:23:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:21:32.840 02:23:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:21:32.840 02:23:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:21:32.840 02:23:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:21:32.840 02:23:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:21:32.840 02:23:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:21:32.840 02:23:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:21:32.840 02:23:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:21:32.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:32.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:21:32.840
00:21:32.840 --- 10.0.0.2 ping statistics ---
00:21:32.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:32.840 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:21:32.840 02:23:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:21:32.840 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:21:32.840 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms
00:21:32.840
00:21:32.840 --- 10.0.0.3 ping statistics ---
00:21:32.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:32.840 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:21:32.840 02:23:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:21:32.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:32.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms
00:21:32.840
00:21:32.840 --- 10.0.0.1 ping statistics ---
00:21:32.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:32.840 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:21:32.840 02:23:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:32.840 02:23:32 -- nvmf/common.sh@421 -- # return 0
00:21:32.840 02:23:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:21:32.840 02:23:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:32.840 02:23:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:21:32.840 02:23:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:21:32.840 02:23:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:32.840 02:23:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:21:32.840 02:23:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:21:32.840 02:23:32 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:21:32.840 02:23:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:21:32.840 02:23:32 -- common/autotest_common.sh@712 -- # xtrace_disable
00:21:32.840 02:23:32 -- common/autotest_common.sh@10 -- # set +x
00:21:32.840 02:23:32 -- nvmf/common.sh@469 -- # nvmfpid=95257
00:21:32.840 02:23:32 -- nvmf/common.sh@470 -- # waitforlisten 95257
00:21:32.840 02:23:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:21:32.840 02:23:32 -- common/autotest_common.sh@819 -- # '[' -z 95257 ']'
00:21:32.840 02:23:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:32.840 02:23:32 -- common/autotest_common.sh@824 -- # local max_retries=100
00:21:32.840 02:23:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:32.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
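The nvmf_veth_init sequence above builds the virtual topology the rest of this test talks over. A condensed sketch of the same layout, using only the interface names, addresses and commands visible in the log (the teardown of stale devices and some link-up steps are trimmed):

    # Target interfaces live in a separate network namespace; the host-side veth
    # peers are enslaved to one bridge so initiator and target can reach each other.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side,    10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side,    10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # connectivity check, as in the log above

With this in place, the nvmf target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk ...) while the host-side tools connect to 10.0.0.2 over the bridge.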
00:21:32.840 02:23:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:32.840 02:23:32 -- common/autotest_common.sh@10 -- # set +x 00:21:32.840 [2024-07-15 02:23:32.390364] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:32.840 [2024-07-15 02:23:32.391036] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.100 [2024-07-15 02:23:32.527007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.100 [2024-07-15 02:23:32.615112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:33.100 [2024-07-15 02:23:32.615270] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.100 [2024-07-15 02:23:32.615283] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.100 [2024-07-15 02:23:32.615292] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.100 [2024-07-15 02:23:32.615322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.034 02:23:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:34.034 02:23:33 -- common/autotest_common.sh@852 -- # return 0 00:21:34.034 02:23:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:34.034 02:23:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:34.034 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:21:34.034 02:23:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.034 02:23:33 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.034 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.034 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:21:34.034 [2024-07-15 02:23:33.336682] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.034 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.034 02:23:33 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:34.034 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.034 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:21:34.034 [2024-07-15 02:23:33.344760] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:34.034 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.034 02:23:33 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:34.034 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.034 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:21:34.034 null0 00:21:34.034 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.034 02:23:33 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:34.034 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.034 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:21:34.034 null1 00:21:34.034 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.034 02:23:33 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:34.034 02:23:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.034 02:23:33 -- 
common/autotest_common.sh@10 -- # set +x 00:21:34.034 02:23:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.034 02:23:33 -- host/discovery.sh@45 -- # hostpid=95307 00:21:34.034 02:23:33 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:34.034 02:23:33 -- host/discovery.sh@46 -- # waitforlisten 95307 /tmp/host.sock 00:21:34.034 02:23:33 -- common/autotest_common.sh@819 -- # '[' -z 95307 ']' 00:21:34.034 02:23:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:34.034 02:23:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:34.034 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:34.034 02:23:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:34.034 02:23:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:34.034 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:21:34.034 [2024-07-15 02:23:33.431531] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:34.034 [2024-07-15 02:23:33.431655] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95307 ] 00:21:34.034 [2024-07-15 02:23:33.571789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.299 [2024-07-15 02:23:33.672249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:34.299 [2024-07-15 02:23:33.672411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.879 02:23:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:34.879 02:23:34 -- common/autotest_common.sh@852 -- # return 0 00:21:34.879 02:23:34 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.879 02:23:34 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:34.879 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.879 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:34.879 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.879 02:23:34 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:34.879 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:34.879 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:34.879 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:34.879 02:23:34 -- host/discovery.sh@72 -- # notify_id=0 00:21:34.879 02:23:34 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:34.879 02:23:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:34.879 02:23:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.137 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # sort 00:21:35.137 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # xargs 00:21:35.137 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.137 02:23:34 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:35.137 02:23:34 -- host/discovery.sh@79 -- # get_bdev_list 00:21:35.137 
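
The get_subsystem_names and get_bdev_list probes that recur below are thin wrappers over two RPCs against the host app's /tmp/host.sock. Reconstructed from the flags visible in the trace (a sketch, not the verbatim discovery.sh definitions):

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

sort | xargs flattens the names onto one space-separated line, which is what the [[ '' == '' ]]-style comparisons below expect: empty before discovery attaches anything, then nvme0 and "nvme0n1 nvme0n2" as the controller and its namespaces appear.
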
02:23:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.137 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.137 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.137 02:23:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.137 02:23:34 -- host/discovery.sh@55 -- # xargs 00:21:35.137 02:23:34 -- host/discovery.sh@55 -- # sort 00:21:35.137 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.137 02:23:34 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:35.137 02:23:34 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:35.137 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.137 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.137 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.137 02:23:34 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.137 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.137 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # xargs 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # sort 00:21:35.137 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.137 02:23:34 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:35.137 02:23:34 -- host/discovery.sh@83 -- # get_bdev_list 00:21:35.137 02:23:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.137 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.137 02:23:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.137 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.137 02:23:34 -- host/discovery.sh@55 -- # xargs 00:21:35.137 02:23:34 -- host/discovery.sh@55 -- # sort 00:21:35.137 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.137 02:23:34 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:35.137 02:23:34 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:35.137 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.137 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.137 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.137 02:23:34 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.137 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.137 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # sort 00:21:35.137 02:23:34 -- host/discovery.sh@59 -- # xargs 00:21:35.137 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.395 02:23:34 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:35.395 02:23:34 -- host/discovery.sh@87 -- # get_bdev_list 00:21:35.395 02:23:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.395 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.395 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.395 02:23:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.395 02:23:34 -- host/discovery.sh@55 -- # sort 00:21:35.395 02:23:34 -- host/discovery.sh@55 -- # 
xargs 00:21:35.395 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.395 02:23:34 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:35.395 02:23:34 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:35.395 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.395 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.395 [2024-07-15 02:23:34.781195] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.395 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.395 02:23:34 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:35.395 02:23:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.395 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.395 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.395 02:23:34 -- host/discovery.sh@59 -- # sort 00:21:35.395 02:23:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.395 02:23:34 -- host/discovery.sh@59 -- # xargs 00:21:35.395 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.395 02:23:34 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:35.395 02:23:34 -- host/discovery.sh@93 -- # get_bdev_list 00:21:35.395 02:23:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.395 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.395 02:23:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.395 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.395 02:23:34 -- host/discovery.sh@55 -- # sort 00:21:35.395 02:23:34 -- host/discovery.sh@55 -- # xargs 00:21:35.395 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.395 02:23:34 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:35.395 02:23:34 -- host/discovery.sh@94 -- # get_notification_count 00:21:35.395 02:23:34 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:35.395 02:23:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:35.395 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.395 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.395 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.395 02:23:34 -- host/discovery.sh@74 -- # notification_count=0 00:21:35.395 02:23:34 -- host/discovery.sh@75 -- # notify_id=0 00:21:35.395 02:23:34 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:35.395 02:23:34 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:35.395 02:23:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:35.395 02:23:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.653 02:23:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:35.653 02:23:34 -- host/discovery.sh@100 -- # sleep 1 00:21:35.911 [2024-07-15 02:23:35.440108] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:35.911 [2024-07-15 02:23:35.440181] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:35.911 [2024-07-15 02:23:35.440203] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:36.169 [2024-07-15 02:23:35.526237] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:36.169 [2024-07-15 02:23:35.582352] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:36.169 [2024-07-15 02:23:35.582436] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:36.427 02:23:35 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:36.427 02:23:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:36.427 02:23:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:36.427 02:23:35 -- host/discovery.sh@59 -- # sort 00:21:36.427 02:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.427 02:23:35 -- host/discovery.sh@59 -- # xargs 00:21:36.427 02:23:35 -- common/autotest_common.sh@10 -- # set +x 00:21:36.427 02:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.685 02:23:36 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.685 02:23:36 -- host/discovery.sh@102 -- # get_bdev_list 00:21:36.685 02:23:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:36.685 02:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.685 02:23:36 -- common/autotest_common.sh@10 -- # set +x 00:21:36.685 02:23:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:36.685 02:23:36 -- host/discovery.sh@55 -- # xargs 00:21:36.685 02:23:36 -- host/discovery.sh@55 -- # sort 00:21:36.685 02:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.685 02:23:36 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:36.685 02:23:36 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:36.685 02:23:36 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:36.685 02:23:36 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:36.685 02:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.685 02:23:36 -- host/discovery.sh@63 -- # sort 
-n 00:21:36.685 02:23:36 -- common/autotest_common.sh@10 -- # set +x 00:21:36.685 02:23:36 -- host/discovery.sh@63 -- # xargs 00:21:36.685 02:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.685 02:23:36 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:36.685 02:23:36 -- host/discovery.sh@104 -- # get_notification_count 00:21:36.685 02:23:36 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:36.685 02:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.685 02:23:36 -- common/autotest_common.sh@10 -- # set +x 00:21:36.685 02:23:36 -- host/discovery.sh@74 -- # jq '. | length' 00:21:36.685 02:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.685 02:23:36 -- host/discovery.sh@74 -- # notification_count=1 00:21:36.685 02:23:36 -- host/discovery.sh@75 -- # notify_id=1 00:21:36.685 02:23:36 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:36.685 02:23:36 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:36.685 02:23:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.685 02:23:36 -- common/autotest_common.sh@10 -- # set +x 00:21:36.685 02:23:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.685 02:23:36 -- host/discovery.sh@109 -- # sleep 1 00:21:38.058 02:23:37 -- host/discovery.sh@110 -- # get_bdev_list 00:21:38.059 02:23:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.059 02:23:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.059 02:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.059 02:23:37 -- common/autotest_common.sh@10 -- # set +x 00:21:38.059 02:23:37 -- host/discovery.sh@55 -- # sort 00:21:38.059 02:23:37 -- host/discovery.sh@55 -- # xargs 00:21:38.059 02:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.059 02:23:37 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:38.059 02:23:37 -- host/discovery.sh@111 -- # get_notification_count 00:21:38.059 02:23:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:38.059 02:23:37 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:38.059 02:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.059 02:23:37 -- common/autotest_common.sh@10 -- # set +x 00:21:38.059 02:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.059 02:23:37 -- host/discovery.sh@74 -- # notification_count=1 00:21:38.059 02:23:37 -- host/discovery.sh@75 -- # notify_id=2 00:21:38.059 02:23:37 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:38.059 02:23:37 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:38.059 02:23:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.059 02:23:37 -- common/autotest_common.sh@10 -- # set +x 00:21:38.059 [2024-07-15 02:23:37.298334] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:38.059 [2024-07-15 02:23:37.299031] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:38.059 [2024-07-15 02:23:37.299067] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:38.059 02:23:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.059 02:23:37 -- host/discovery.sh@117 -- # sleep 1 00:21:38.059 [2024-07-15 02:23:37.385090] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:38.059 [2024-07-15 02:23:37.442414] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:38.059 [2024-07-15 02:23:37.442478] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:38.059 [2024-07-15 02:23:37.442504] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:38.996 02:23:38 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:38.996 02:23:38 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.996 02:23:38 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.996 02:23:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.996 02:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:38.996 02:23:38 -- host/discovery.sh@59 -- # sort 00:21:38.996 02:23:38 -- host/discovery.sh@59 -- # xargs 00:21:38.996 02:23:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.996 02:23:38 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.996 02:23:38 -- host/discovery.sh@119 -- # get_bdev_list 00:21:38.996 02:23:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.996 02:23:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.996 02:23:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.996 02:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:38.996 02:23:38 -- host/discovery.sh@55 -- # sort 00:21:38.996 02:23:38 -- host/discovery.sh@55 -- # xargs 00:21:38.996 02:23:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.996 02:23:38 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:38.996 02:23:38 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:38.996 02:23:38 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:38.996 02:23:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.996 02:23:38 -- common/autotest_common.sh@10 -- 
# set +x 00:21:38.996 02:23:38 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:38.996 02:23:38 -- host/discovery.sh@63 -- # sort -n 00:21:38.996 02:23:38 -- host/discovery.sh@63 -- # xargs 00:21:38.996 02:23:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.996 02:23:38 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:38.996 02:23:38 -- host/discovery.sh@121 -- # get_notification_count 00:21:38.996 02:23:38 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:38.996 02:23:38 -- host/discovery.sh@74 -- # jq '. | length' 00:21:38.996 02:23:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.996 02:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:38.996 02:23:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.996 02:23:38 -- host/discovery.sh@74 -- # notification_count=0 00:21:38.996 02:23:38 -- host/discovery.sh@75 -- # notify_id=2 00:21:38.996 02:23:38 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:38.996 02:23:38 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:38.996 02:23:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.996 02:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:38.996 [2024-07-15 02:23:38.515710] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:38.996 [2024-07-15 02:23:38.515757] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:38.996 02:23:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.996 02:23:38 -- host/discovery.sh@127 -- # sleep 1 00:21:38.996 [2024-07-15 02:23:38.520485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.996 [2024-07-15 02:23:38.520527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.996 [2024-07-15 02:23:38.520542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.996 [2024-07-15 02:23:38.520553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.996 [2024-07-15 02:23:38.520565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.996 [2024-07-15 02:23:38.520576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.996 [2024-07-15 02:23:38.520588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.996 [2024-07-15 02:23:38.520610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.996 [2024-07-15 02:23:38.520624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934ea0 is same with the state(5) to be set 00:21:38.996 [2024-07-15 02:23:38.530434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934ea0 (9): Bad file descriptor 00:21:38.996 [2024-07-15 02:23:38.540454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.996 [2024-07-15 02:23:38.540600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.996 [2024-07-15 02:23:38.540689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.996 [2024-07-15 02:23:38.540711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x934ea0 with addr=10.0.0.2, port=4420 00:21:38.996 [2024-07-15 02:23:38.540724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934ea0 is same with the state(5) to be set 00:21:38.996 [2024-07-15 02:23:38.540745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934ea0 (9): Bad file descriptor 00:21:38.996 [2024-07-15 02:23:38.540779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.996 [2024-07-15 02:23:38.540792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.996 [2024-07-15 02:23:38.540804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.996 [2024-07-15 02:23:38.540822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:38.996 [2024-07-15 02:23:38.550532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:38.996 [2024-07-15 02:23:38.550671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.996 [2024-07-15 02:23:38.550727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.996 [2024-07-15 02:23:38.550780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x934ea0 with addr=10.0.0.2, port=4420 00:21:38.996 [2024-07-15 02:23:38.550794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934ea0 is same with the state(5) to be set 00:21:38.996 [2024-07-15 02:23:38.550814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934ea0 (9): Bad file descriptor 00:21:38.996 [2024-07-15 02:23:38.550844] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.996 [2024-07-15 02:23:38.550856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:38.996 [2024-07-15 02:23:38.550867] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.996 [2024-07-15 02:23:38.550884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:39.256 [2024-07-15 02:23:38.560615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.256 [2024-07-15 02:23:38.560722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.560778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.560797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x934ea0 with addr=10.0.0.2, port=4420 00:21:39.256 [2024-07-15 02:23:38.560809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934ea0 is same with the state(5) to be set 00:21:39.256 [2024-07-15 02:23:38.560827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934ea0 (9): Bad file descriptor 00:21:39.256 [2024-07-15 02:23:38.560877] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.256 [2024-07-15 02:23:38.560892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.256 [2024-07-15 02:23:38.560901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.256 [2024-07-15 02:23:38.560918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:39.256 [2024-07-15 02:23:38.570689] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.256 [2024-07-15 02:23:38.570808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.570861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.570897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x934ea0 with addr=10.0.0.2, port=4420 00:21:39.256 [2024-07-15 02:23:38.570926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934ea0 is same with the state(5) to be set 00:21:39.256 [2024-07-15 02:23:38.570946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934ea0 (9): Bad file descriptor 00:21:39.256 [2024-07-15 02:23:38.570975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.256 [2024-07-15 02:23:38.570987] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.256 [2024-07-15 02:23:38.570998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.256 [2024-07-15 02:23:38.571023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:39.256 [2024-07-15 02:23:38.580771] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.256 [2024-07-15 02:23:38.580870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.580921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.580939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x934ea0 with addr=10.0.0.2, port=4420 00:21:39.256 [2024-07-15 02:23:38.580950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934ea0 is same with the state(5) to be set 00:21:39.256 [2024-07-15 02:23:38.580968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934ea0 (9): Bad file descriptor 00:21:39.256 [2024-07-15 02:23:38.580996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.256 [2024-07-15 02:23:38.581009] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.256 [2024-07-15 02:23:38.581018] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.256 [2024-07-15 02:23:38.581034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:39.256 [2024-07-15 02:23:38.590839] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.256 [2024-07-15 02:23:38.590945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.590998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.591032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x934ea0 with addr=10.0.0.2, port=4420 00:21:39.256 [2024-07-15 02:23:38.591044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934ea0 is same with the state(5) to be set 00:21:39.256 [2024-07-15 02:23:38.591061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934ea0 (9): Bad file descriptor 00:21:39.256 [2024-07-15 02:23:38.591104] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.256 [2024-07-15 02:23:38.591132] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.256 [2024-07-15 02:23:38.591159] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.256 [2024-07-15 02:23:38.591176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
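
The burst of "connect() failed, errno = 111" entries in this stretch is the test behaving as intended: errno 111 is ECONNREFUSED. The 4420 listener was just removed, so every bdev_nvme reconnect attempt to 10.0.0.2:4420 is refused until the discovery poller re-reads the log page and drops that path, which is exactly what the ":4420 not found" / ":4421 found again" pair just below records. A quick way to confirm the errno mapping on the build host (an illustrative one-liner, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[111], "=", os.strerror(111))'
    # ECONNREFUSED = Connection refused
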
00:21:39.256 [2024-07-15 02:23:38.600906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:39.256 [2024-07-15 02:23:38.601004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.601055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.256 [2024-07-15 02:23:38.601074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x934ea0 with addr=10.0.0.2, port=4420 00:21:39.256 [2024-07-15 02:23:38.601085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x934ea0 is same with the state(5) to be set 00:21:39.256 [2024-07-15 02:23:38.601103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x934ea0 (9): Bad file descriptor 00:21:39.256 [2024-07-15 02:23:38.601130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:39.256 [2024-07-15 02:23:38.601143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:39.256 [2024-07-15 02:23:38.601153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:39.256 [2024-07-15 02:23:38.601168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:39.256 [2024-07-15 02:23:38.602140] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:39.256 [2024-07-15 02:23:38.602205] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:40.193 02:23:39 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:40.193 02:23:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:40.193 02:23:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:40.193 02:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.193 02:23:39 -- common/autotest_common.sh@10 -- # set +x 00:21:40.193 02:23:39 -- host/discovery.sh@59 -- # xargs 00:21:40.193 02:23:39 -- host/discovery.sh@59 -- # sort 00:21:40.193 02:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.193 02:23:39 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.193 02:23:39 -- host/discovery.sh@129 -- # get_bdev_list 00:21:40.193 02:23:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.193 02:23:39 -- host/discovery.sh@55 -- # sort 00:21:40.193 02:23:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:40.193 02:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.193 02:23:39 -- common/autotest_common.sh@10 -- # set +x 00:21:40.193 02:23:39 -- host/discovery.sh@55 -- # xargs 00:21:40.193 02:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.193 02:23:39 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:40.193 02:23:39 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:40.193 02:23:39 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:40.193 02:23:39 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:40.193 02:23:39 -- host/discovery.sh@63 -- # sort -n 00:21:40.193 02:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.193 02:23:39 -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.193 02:23:39 -- host/discovery.sh@63 -- # xargs 00:21:40.193 02:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.193 02:23:39 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:40.193 02:23:39 -- host/discovery.sh@131 -- # get_notification_count 00:21:40.193 02:23:39 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:40.193 02:23:39 -- host/discovery.sh@74 -- # jq '. | length' 00:21:40.193 02:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.193 02:23:39 -- common/autotest_common.sh@10 -- # set +x 00:21:40.193 02:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.193 02:23:39 -- host/discovery.sh@74 -- # notification_count=0 00:21:40.193 02:23:39 -- host/discovery.sh@75 -- # notify_id=2 00:21:40.193 02:23:39 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:40.193 02:23:39 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:40.193 02:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.193 02:23:39 -- common/autotest_common.sh@10 -- # set +x 00:21:40.452 02:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.452 02:23:39 -- host/discovery.sh@135 -- # sleep 1 00:21:41.388 02:23:40 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:41.388 02:23:40 -- host/discovery.sh@59 -- # sort 00:21:41.388 02:23:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:41.388 02:23:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:41.388 02:23:40 -- host/discovery.sh@59 -- # xargs 00:21:41.388 02:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.388 02:23:40 -- common/autotest_common.sh@10 -- # set +x 00:21:41.388 02:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.388 02:23:40 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:41.388 02:23:40 -- host/discovery.sh@137 -- # get_bdev_list 00:21:41.388 02:23:40 -- host/discovery.sh@55 -- # xargs 00:21:41.388 02:23:40 -- host/discovery.sh@55 -- # sort 00:21:41.388 02:23:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:41.388 02:23:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.388 02:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.388 02:23:40 -- common/autotest_common.sh@10 -- # set +x 00:21:41.388 02:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.388 02:23:40 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:41.388 02:23:40 -- host/discovery.sh@138 -- # get_notification_count 00:21:41.388 02:23:40 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:41.388 02:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.388 02:23:40 -- common/autotest_common.sh@10 -- # set +x 00:21:41.388 02:23:40 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:41.388 02:23:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.388 02:23:40 -- host/discovery.sh@74 -- # notification_count=2 00:21:41.388 02:23:40 -- host/discovery.sh@75 -- # notify_id=4 00:21:41.388 02:23:40 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:41.388 02:23:40 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.388 02:23:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.388 02:23:40 -- common/autotest_common.sh@10 -- # set +x 00:21:42.767 [2024-07-15 02:23:41.941039] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:42.767 [2024-07-15 02:23:41.941091] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:42.767 [2024-07-15 02:23:41.941113] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:42.767 [2024-07-15 02:23:42.027204] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:42.767 [2024-07-15 02:23:42.086721] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:42.767 [2024-07-15 02:23:42.086813] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:42.767 02:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.767 02:23:42 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.767 02:23:42 -- common/autotest_common.sh@640 -- # local es=0 00:21:42.767 02:23:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.767 02:23:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:42.767 02:23:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:42.767 02:23:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:42.767 02:23:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:42.767 02:23:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.767 02:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.767 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.767 2024/07/15 02:23:42 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:42.767 request: 00:21:42.767 { 00:21:42.767 "method": "bdev_nvme_start_discovery", 00:21:42.767 "params": { 00:21:42.767 "name": "nvme", 00:21:42.767 "trtype": "tcp", 00:21:42.767 "traddr": "10.0.0.2", 00:21:42.767 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:42.767 "adrfam": "ipv4", 00:21:42.767 "trsvcid": "8009", 00:21:42.767 "wait_for_attach": true 00:21:42.767 } 00:21:42.767 } 00:21:42.767 Got JSON-RPC error response 00:21:42.767 GoRPCClient: error on JSON-RPC call 00:21:42.767 02:23:42 -- 
common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:42.767 02:23:42 -- common/autotest_common.sh@643 -- # es=1 00:21:42.767 02:23:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:42.767 02:23:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:42.767 02:23:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:42.767 02:23:42 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:42.767 02:23:42 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:42.767 02:23:42 -- host/discovery.sh@67 -- # sort 00:21:42.767 02:23:42 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:42.767 02:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.767 02:23:42 -- host/discovery.sh@67 -- # xargs 00:21:42.767 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.767 02:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.767 02:23:42 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:42.767 02:23:42 -- host/discovery.sh@147 -- # get_bdev_list 00:21:42.767 02:23:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.767 02:23:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.767 02:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.767 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.768 02:23:42 -- host/discovery.sh@55 -- # sort 00:21:42.768 02:23:42 -- host/discovery.sh@55 -- # xargs 00:21:42.768 02:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.768 02:23:42 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:42.768 02:23:42 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.768 02:23:42 -- common/autotest_common.sh@640 -- # local es=0 00:21:42.768 02:23:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.768 02:23:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:42.768 02:23:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:42.768 02:23:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:42.768 02:23:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:42.768 02:23:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.768 02:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.768 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.768 2024/07/15 02:23:42 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:42.768 request: 00:21:42.768 { 00:21:42.768 "method": "bdev_nvme_start_discovery", 00:21:42.768 "params": { 00:21:42.768 "name": "nvme_second", 00:21:42.768 "trtype": "tcp", 00:21:42.768 "traddr": "10.0.0.2", 00:21:42.768 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:42.768 "adrfam": "ipv4", 00:21:42.768 "trsvcid": "8009", 00:21:42.768 "wait_for_attach": true 00:21:42.768 } 00:21:42.768 } 00:21:42.768 Got JSON-RPC error response 00:21:42.768 
GoRPCClient: error on JSON-RPC call 00:21:42.768 02:23:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:42.768 02:23:42 -- common/autotest_common.sh@643 -- # es=1 00:21:42.768 02:23:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:42.768 02:23:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:42.768 02:23:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:42.768 02:23:42 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:42.768 02:23:42 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:42.768 02:23:42 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:42.768 02:23:42 -- host/discovery.sh@67 -- # sort 00:21:42.768 02:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.768 02:23:42 -- host/discovery.sh@67 -- # xargs 00:21:42.768 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.768 02:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.768 02:23:42 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:42.768 02:23:42 -- host/discovery.sh@153 -- # get_bdev_list 00:21:42.768 02:23:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.768 02:23:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.768 02:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.768 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:21:42.768 02:23:42 -- host/discovery.sh@55 -- # sort 00:21:42.768 02:23:42 -- host/discovery.sh@55 -- # xargs 00:21:42.768 02:23:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.026 02:23:42 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:43.026 02:23:42 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:43.026 02:23:42 -- common/autotest_common.sh@640 -- # local es=0 00:21:43.026 02:23:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:43.026 02:23:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:43.026 02:23:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.026 02:23:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:43.026 02:23:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:43.026 02:23:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:43.026 02:23:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.026 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:21:43.958 [2024-07-15 02:23:43.344439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.958 [2024-07-15 02:23:43.344570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.958 [2024-07-15 02:23:43.344593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97b630 with addr=10.0.0.2, port=8010 00:21:43.958 [2024-07-15 02:23:43.344631] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:43.958 [2024-07-15 02:23:43.344664] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:43.958 [2024-07-15 02:23:43.344675] bdev_nvme.c:6821:discovery_poller: 
*ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:44.928 [2024-07-15 02:23:44.344399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.928 [2024-07-15 02:23:44.344509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.928 [2024-07-15 02:23:44.344532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97b630 with addr=10.0.0.2, port=8010 00:21:44.928 [2024-07-15 02:23:44.344557] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:44.928 [2024-07-15 02:23:44.344569] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:44.928 [2024-07-15 02:23:44.344580] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:45.908 [2024-07-15 02:23:45.344298] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:45.908 2024/07/15 02:23:45 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:45.908 request: 00:21:45.908 { 00:21:45.908 "method": "bdev_nvme_start_discovery", 00:21:45.908 "params": { 00:21:45.908 "name": "nvme_second", 00:21:45.908 "trtype": "tcp", 00:21:45.908 "traddr": "10.0.0.2", 00:21:45.908 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:45.908 "adrfam": "ipv4", 00:21:45.908 "trsvcid": "8010", 00:21:45.908 "attach_timeout_ms": 3000 00:21:45.908 } 00:21:45.908 } 00:21:45.908 Got JSON-RPC error response 00:21:45.908 GoRPCClient: error on JSON-RPC call 00:21:45.908 02:23:45 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:45.908 02:23:45 -- common/autotest_common.sh@643 -- # es=1 00:21:45.908 02:23:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:45.908 02:23:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:45.908 02:23:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:45.908 02:23:45 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:45.908 02:23:45 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:45.908 02:23:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:45.908 02:23:45 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:45.908 02:23:45 -- common/autotest_common.sh@10 -- # set +x 00:21:45.908 02:23:45 -- host/discovery.sh@67 -- # xargs 00:21:45.908 02:23:45 -- host/discovery.sh@67 -- # sort 00:21:45.908 02:23:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:45.908 02:23:45 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:45.908 02:23:45 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:45.908 02:23:45 -- host/discovery.sh@162 -- # kill 95307 00:21:45.908 02:23:45 -- host/discovery.sh@163 -- # nvmftestfini 00:21:45.908 02:23:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:45.908 02:23:45 -- nvmf/common.sh@116 -- # sync 00:21:45.908 02:23:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:45.908 02:23:45 -- nvmf/common.sh@119 -- # set +e 00:21:45.908 02:23:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:45.908 02:23:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:45.908 rmmod nvme_tcp 00:21:45.908 rmmod nvme_fabrics 00:21:46.168 rmmod nvme_keyring 00:21:46.168 02:23:45 -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-fabrics 00:21:46.168 02:23:45 -- nvmf/common.sh@123 -- # set -e 00:21:46.168 02:23:45 -- nvmf/common.sh@124 -- # return 0 00:21:46.168 02:23:45 -- nvmf/common.sh@477 -- # '[' -n 95257 ']' 00:21:46.168 02:23:45 -- nvmf/common.sh@478 -- # killprocess 95257 00:21:46.168 02:23:45 -- common/autotest_common.sh@926 -- # '[' -z 95257 ']' 00:21:46.168 02:23:45 -- common/autotest_common.sh@930 -- # kill -0 95257 00:21:46.168 02:23:45 -- common/autotest_common.sh@931 -- # uname 00:21:46.168 02:23:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:46.168 02:23:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95257 00:21:46.168 killing process with pid 95257 00:21:46.168 02:23:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:46.168 02:23:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:46.168 02:23:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95257' 00:21:46.168 02:23:45 -- common/autotest_common.sh@945 -- # kill 95257 00:21:46.168 02:23:45 -- common/autotest_common.sh@950 -- # wait 95257 00:21:46.427 02:23:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:46.427 02:23:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:46.427 02:23:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:46.427 02:23:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.427 02:23:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:46.427 02:23:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.428 02:23:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.428 02:23:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.428 02:23:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:46.428 00:21:46.428 real 0m13.901s 00:21:46.428 user 0m27.190s 00:21:46.428 sys 0m1.692s 00:21:46.428 02:23:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.428 ************************************ 00:21:46.428 END TEST nvmf_discovery 00:21:46.428 ************************************ 00:21:46.428 02:23:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.428 02:23:45 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:46.428 02:23:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:46.428 02:23:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:46.428 02:23:45 -- common/autotest_common.sh@10 -- # set +x 00:21:46.428 ************************************ 00:21:46.428 START TEST nvmf_discovery_remove_ifc 00:21:46.428 ************************************ 00:21:46.428 02:23:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:46.428 * Looking for test storage... 
00:21:46.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:46.428 02:23:45 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:46.428 02:23:45 -- nvmf/common.sh@7 -- # uname -s 00:21:46.428 02:23:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.428 02:23:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.428 02:23:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.428 02:23:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.428 02:23:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.428 02:23:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.428 02:23:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.428 02:23:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.428 02:23:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.428 02:23:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.428 02:23:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:21:46.428 02:23:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:21:46.428 02:23:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.428 02:23:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.428 02:23:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:46.428 02:23:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:46.428 02:23:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.428 02:23:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.428 02:23:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.428 02:23:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.428 02:23:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.428 02:23:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.428 02:23:45 -- 
paths/export.sh@5 -- # export PATH 00:21:46.428 02:23:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.428 02:23:45 -- nvmf/common.sh@46 -- # : 0 00:21:46.428 02:23:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:46.428 02:23:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:46.428 02:23:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:46.428 02:23:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.428 02:23:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.428 02:23:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:46.428 02:23:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:46.428 02:23:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:46.428 02:23:45 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:46.428 02:23:45 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:46.428 02:23:45 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:46.428 02:23:45 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:46.428 02:23:45 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:46.428 02:23:45 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:46.428 02:23:45 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:46.428 02:23:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:46.428 02:23:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.428 02:23:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:46.428 02:23:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:46.428 02:23:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:46.428 02:23:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.428 02:23:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.428 02:23:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.428 02:23:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:46.428 02:23:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:46.428 02:23:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:46.428 02:23:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:46.428 02:23:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:46.428 02:23:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:46.428 02:23:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.428 02:23:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.428 02:23:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:46.428 02:23:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:46.428 02:23:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:46.428 02:23:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:46.428 02:23:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:46.428 02:23:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
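[Editor's note] At this point discovery_remove_ifc.sh has fixed its parameters: the discovery service on port 8009, the well-known discovery NQN, subsystem NQNs under nqn.2016-06.io.spdk:cnode, host NQN nqn.2021-12.io.spdk:test, and a host-side RPC socket at /tmp/host.sock. Once the target is up (after the veth setup that follows), the same discovery service could be queried by hand from the initiator with standard nvme-cli, e.g.:

    # Manual equivalent of what the SPDK host does below, using nvme-cli:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -q nqn.2021-12.io.spdk:test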
00:21:46.428 02:23:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:46.428 02:23:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:46.428 02:23:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:46.428 02:23:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:46.428 02:23:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:46.428 02:23:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:46.428 Cannot find device "nvmf_tgt_br" 00:21:46.428 02:23:45 -- nvmf/common.sh@154 -- # true 00:21:46.428 02:23:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:46.428 Cannot find device "nvmf_tgt_br2" 00:21:46.428 02:23:45 -- nvmf/common.sh@155 -- # true 00:21:46.428 02:23:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:46.428 02:23:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:46.687 Cannot find device "nvmf_tgt_br" 00:21:46.687 02:23:45 -- nvmf/common.sh@157 -- # true 00:21:46.687 02:23:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:46.687 Cannot find device "nvmf_tgt_br2" 00:21:46.687 02:23:45 -- nvmf/common.sh@158 -- # true 00:21:46.687 02:23:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:46.687 02:23:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:46.687 02:23:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:46.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.687 02:23:46 -- nvmf/common.sh@161 -- # true 00:21:46.687 02:23:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:46.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.687 02:23:46 -- nvmf/common.sh@162 -- # true 00:21:46.687 02:23:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:46.687 02:23:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:46.687 02:23:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:46.687 02:23:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:46.687 02:23:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:46.687 02:23:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:46.687 02:23:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:46.687 02:23:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:46.687 02:23:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:46.687 02:23:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:46.687 02:23:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:46.687 02:23:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:46.687 02:23:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:46.687 02:23:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:46.687 02:23:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:46.687 02:23:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:46.687 02:23:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:46.687 02:23:46 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:21:46.687 02:23:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:46.687 02:23:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:46.687 02:23:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:46.687 02:23:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:46.687 02:23:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:46.687 02:23:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:46.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:21:46.945 00:21:46.945 --- 10.0.0.2 ping statistics --- 00:21:46.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.945 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:21:46.945 02:23:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:46.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:46.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:21:46.945 00:21:46.945 --- 10.0.0.3 ping statistics --- 00:21:46.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.946 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:46.946 02:23:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:46.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:46.946 00:21:46.946 --- 10.0.0.1 ping statistics --- 00:21:46.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.946 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:46.946 02:23:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.946 02:23:46 -- nvmf/common.sh@421 -- # return 0 00:21:46.946 02:23:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:46.946 02:23:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.946 02:23:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:46.946 02:23:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:46.946 02:23:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.946 02:23:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:46.946 02:23:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:46.946 02:23:46 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:46.946 02:23:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:46.946 02:23:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:46.946 02:23:46 -- common/autotest_common.sh@10 -- # set +x 00:21:46.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.946 02:23:46 -- nvmf/common.sh@469 -- # nvmfpid=95806 00:21:46.946 02:23:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:46.946 02:23:46 -- nvmf/common.sh@470 -- # waitforlisten 95806 00:21:46.946 02:23:46 -- common/autotest_common.sh@819 -- # '[' -z 95806 ']' 00:21:46.946 02:23:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.946 02:23:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.946 02:23:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
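[Editor's note] The nvmf_veth_init block above builds a small bridged topology: an initiator-side veth at 10.0.0.1, a target namespace nvmf_tgt_ns_spdk holding 10.0.0.2 and 10.0.0.3, everything joined by the nvmf_br bridge, then verified with the three pings. Boiled down to its essential commands (one target interface shown; the log creates two):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, matching the statistics above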
00:21:46.946 02:23:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.946 02:23:46 -- common/autotest_common.sh@10 -- # set +x 00:21:46.946 [2024-07-15 02:23:46.337587] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:46.946 [2024-07-15 02:23:46.337699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.946 [2024-07-15 02:23:46.473763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.204 [2024-07-15 02:23:46.561872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:47.204 [2024-07-15 02:23:46.562172] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.204 [2024-07-15 02:23:46.562294] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.204 [2024-07-15 02:23:46.562430] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.204 [2024-07-15 02:23:46.562544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.770 02:23:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.770 02:23:47 -- common/autotest_common.sh@852 -- # return 0 00:21:47.770 02:23:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:47.770 02:23:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:47.770 02:23:47 -- common/autotest_common.sh@10 -- # set +x 00:21:48.029 02:23:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.029 02:23:47 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:48.029 02:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.029 02:23:47 -- common/autotest_common.sh@10 -- # set +x 00:21:48.029 [2024-07-15 02:23:47.372073] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.029 [2024-07-15 02:23:47.380133] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:48.029 null0 00:21:48.029 [2024-07-15 02:23:47.412088] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.029 02:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.029 02:23:47 -- host/discovery_remove_ifc.sh@59 -- # hostpid=95856 00:21:48.029 02:23:47 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 95856 /tmp/host.sock 00:21:48.029 02:23:47 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:48.029 02:23:47 -- common/autotest_common.sh@819 -- # '[' -z 95856 ']' 00:21:48.029 02:23:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:48.029 02:23:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:48.029 02:23:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:48.029 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:48.029 02:23:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:48.029 02:23:47 -- common/autotest_common.sh@10 -- # set +x 00:21:48.029 [2024-07-15 02:23:47.489055] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
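[Editor's note] The rpc_cmd block above (host/discovery_remove_ifc.sh@43) is what produces the TCP transport, the listeners on 8009 and 4420, and the null0 bdev. The log shows only the resulting notices, so the exact RPC sequence below is a hypothetical reconstruction, not the script's verbatim commands:

    # Assumed target-side configuration; only the outcome is confirmed by the log.
    rpc.py nvmf_create_transport -t tcp
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420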
00:21:48.029 [2024-07-15 02:23:47.489425] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95856 ] 00:21:48.287 [2024-07-15 02:23:47.630314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.287 [2024-07-15 02:23:47.720456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:48.287 [2024-07-15 02:23:47.720794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.287 02:23:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:48.287 02:23:47 -- common/autotest_common.sh@852 -- # return 0 00:21:48.287 02:23:47 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:48.287 02:23:47 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:48.287 02:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.287 02:23:47 -- common/autotest_common.sh@10 -- # set +x 00:21:48.287 02:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.287 02:23:47 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:48.287 02:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.287 02:23:47 -- common/autotest_common.sh@10 -- # set +x 00:21:48.545 02:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.545 02:23:47 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:48.545 02:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.545 02:23:47 -- common/autotest_common.sh@10 -- # set +x 00:21:49.481 [2024-07-15 02:23:48.885042] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:49.481 [2024-07-15 02:23:48.885095] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:49.481 [2024-07-15 02:23:48.885126] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.481 [2024-07-15 02:23:48.971260] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:49.481 [2024-07-15 02:23:49.027327] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:49.481 [2024-07-15 02:23:49.027387] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:49.481 [2024-07-15 02:23:49.027415] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:49.481 [2024-07-15 02:23:49.027433] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:49.482 [2024-07-15 02:23:49.027460] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:49.482 02:23:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.482 02:23:49 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:49.482 02:23:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:49.482 02:23:49 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.482 [2024-07-15 02:23:49.033622] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18707d0 was disconnected and freed. delete nvme_qpair. 00:21:49.482 02:23:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:49.482 02:23:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.482 02:23:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.482 02:23:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:49.482 02:23:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:49.740 02:23:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.740 02:23:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:49.740 02:23:49 -- common/autotest_common.sh@10 -- # set +x 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:49.740 02:23:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:49.740 02:23:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:50.675 02:23:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:50.675 02:23:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.675 02:23:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.675 02:23:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:50.675 02:23:50 -- common/autotest_common.sh@10 -- # set +x 00:21:50.675 02:23:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:50.675 02:23:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:50.675 02:23:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.932 02:23:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:50.932 02:23:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:51.864 02:23:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:51.864 02:23:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.864 02:23:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.864 02:23:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:51.864 02:23:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:51.864 02:23:51 -- common/autotest_common.sh@10 -- # set +x 00:21:51.864 02:23:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:51.864 02:23:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.864 02:23:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:51.864 02:23:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:52.795 02:23:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:52.795 02:23:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
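[Editor's note] The wait_for_bdev calls above poll the host's bdev list once per second until it matches the expected value (nvme0n1 after attach, the empty string after the interface is pulled). Reconstructed from the traced commands:

    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # e.g. wait_for_bdev nvme0n1, or wait_for_bdev '' for "no bdevs left"
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }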
00:21:52.795 02:23:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.795 02:23:52 -- common/autotest_common.sh@10 -- # set +x 00:21:52.795 02:23:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:52.795 02:23:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:52.796 02:23:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:52.796 02:23:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.053 02:23:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:53.053 02:23:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:53.985 02:23:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:53.985 02:23:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.985 02:23:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.985 02:23:53 -- common/autotest_common.sh@10 -- # set +x 00:21:53.985 02:23:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:53.985 02:23:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:53.985 02:23:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:53.985 02:23:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.985 02:23:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:53.985 02:23:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:54.916 02:23:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:54.916 02:23:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.916 02:23:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:54.916 02:23:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:54.916 02:23:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:54.916 02:23:54 -- common/autotest_common.sh@10 -- # set +x 00:21:54.916 02:23:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:54.916 02:23:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:54.916 [2024-07-15 02:23:54.455132] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:54.916 [2024-07-15 02:23:54.455216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.916 [2024-07-15 02:23:54.455233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.916 [2024-07-15 02:23:54.455245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.916 [2024-07-15 02:23:54.455255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.916 [2024-07-15 02:23:54.455265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.916 [2024-07-15 02:23:54.455276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.916 [2024-07-15 02:23:54.455286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.916 [2024-07-15 02:23:54.455295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.916 [2024-07-15 
02:23:54.455305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.916 [2024-07-15 02:23:54.455314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.916 [2024-07-15 02:23:54.455323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18384c0 is same with the state(5) to be set 00:21:54.916 [2024-07-15 02:23:54.465128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18384c0 (9): Bad file descriptor 00:21:55.173 [2024-07-15 02:23:54.475171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:55.173 02:23:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:55.173 02:23:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:56.106 [2024-07-15 02:23:55.494767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:56.106 02:23:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:56.106 02:23:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.106 02:23:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:56.106 02:23:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.106 02:23:55 -- common/autotest_common.sh@10 -- # set +x 00:21:56.106 02:23:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:56.106 02:23:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:57.041 [2024-07-15 02:23:56.518768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:57.041 [2024-07-15 02:23:56.518928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18384c0 with addr=10.0.0.2, port=4420 00:21:57.041 [2024-07-15 02:23:56.518967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18384c0 is same with the state(5) to be set 00:21:57.041 [2024-07-15 02:23:56.519027] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:57.041 [2024-07-15 02:23:56.519052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:57.041 [2024-07-15 02:23:56.519073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:57.041 [2024-07-15 02:23:56.519095] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:21:57.041 [2024-07-15 02:23:56.519987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18384c0 (9): Bad file descriptor 00:21:57.041 [2024-07-15 02:23:56.520066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
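[Editor's note] Both failure paths above report errno 110: the keep-alive read on the downed veth times out, and each subsequent reconnect attempt's connect() does the same. On Linux, 110 is ETIMEDOUT; on glibc systems the mapping can be confirmed directly from the kernel headers:

    grep ETIMEDOUT /usr/include/asm-generic/errno.h
    # #define ETIMEDOUT       110     /* Connection timed out */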
00:21:57.041 [2024-07-15 02:23:56.520132] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:57.041 [2024-07-15 02:23:56.520205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.041 [2024-07-15 02:23:56.520235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.041 [2024-07-15 02:23:56.520265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.041 [2024-07-15 02:23:56.520286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.041 [2024-07-15 02:23:56.520309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.041 [2024-07-15 02:23:56.520331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.041 [2024-07-15 02:23:56.520355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.041 [2024-07-15 02:23:56.520376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.041 [2024-07-15 02:23:56.520400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.041 [2024-07-15 02:23:56.520421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.041 [2024-07-15 02:23:56.520443] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
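[Editor's note] This give-up sequence follows the timeouts passed to bdev_nvme_start_discovery earlier (host/discovery_remove_ifc.sh@69): --reconnect-delay-sec 1 retries once per second, --fast-io-fail-timeout-sec 1 fails pending I/O after one second, and --ctrlr-loss-timeout-sec 2 deletes the controller after two seconds without a connection, which is why nvme0n1 drops out of the bdev list instead of lingering in a failed state. The RPC as traced in the log:

    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach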
00:21:57.041 [2024-07-15 02:23:56.520475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18379b0 (9): Bad file descriptor 00:21:57.041 [2024-07-15 02:23:56.521097] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:57.041 [2024-07-15 02:23:56.521131] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:57.041 02:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.041 02:23:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:57.041 02:23:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.417 02:23:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:58.417 02:23:57 -- common/autotest_common.sh@10 -- # set +x 00:21:58.417 02:23:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.417 02:23:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:58.417 02:23:57 -- common/autotest_common.sh@10 -- # set +x 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:58.417 02:23:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:58.417 02:23:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:58.983 [2024-07-15 02:23:58.526793] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:58.984 [2024-07-15 02:23:58.526835] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:58.984 [2024-07-15 02:23:58.526854] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:59.242 [2024-07-15 02:23:58.612925] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:59.242 [2024-07-15 02:23:58.668095] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:59.242 [2024-07-15 02:23:58.668146] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:59.242 [2024-07-15 02:23:58.668170] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:59.242 [2024-07-15 02:23:58.668186] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:21:59.242 [2024-07-15 02:23:58.668195] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:59.242 [2024-07-15 02:23:58.675440] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x187acc0 was disconnected and freed. delete nvme_qpair. 00:21:59.242 02:23:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:59.242 02:23:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.242 02:23:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:59.242 02:23:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.242 02:23:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:59.242 02:23:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:59.242 02:23:58 -- common/autotest_common.sh@10 -- # set +x 00:21:59.242 02:23:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.242 02:23:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:59.242 02:23:58 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:59.242 02:23:58 -- host/discovery_remove_ifc.sh@90 -- # killprocess 95856 00:21:59.242 02:23:58 -- common/autotest_common.sh@926 -- # '[' -z 95856 ']' 00:21:59.242 02:23:58 -- common/autotest_common.sh@930 -- # kill -0 95856 00:21:59.242 02:23:58 -- common/autotest_common.sh@931 -- # uname 00:21:59.242 02:23:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:59.242 02:23:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95856 00:21:59.242 killing process with pid 95856 00:21:59.242 02:23:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:59.242 02:23:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:59.242 02:23:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95856' 00:21:59.242 02:23:58 -- common/autotest_common.sh@945 -- # kill 95856 00:21:59.242 02:23:58 -- common/autotest_common.sh@950 -- # wait 95856 00:21:59.500 02:23:58 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:59.500 02:23:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:59.500 02:23:58 -- nvmf/common.sh@116 -- # sync 00:21:59.500 02:23:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:59.500 02:23:59 -- nvmf/common.sh@119 -- # set +e 00:21:59.500 02:23:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:59.500 02:23:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:59.500 rmmod nvme_tcp 00:21:59.500 rmmod nvme_fabrics 00:21:59.500 rmmod nvme_keyring 00:21:59.758 02:23:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:59.758 02:23:59 -- nvmf/common.sh@123 -- # set -e 00:21:59.758 02:23:59 -- nvmf/common.sh@124 -- # return 0 00:21:59.758 02:23:59 -- nvmf/common.sh@477 -- # '[' -n 95806 ']' 00:21:59.758 02:23:59 -- nvmf/common.sh@478 -- # killprocess 95806 00:21:59.758 02:23:59 -- common/autotest_common.sh@926 -- # '[' -z 95806 ']' 00:21:59.758 02:23:59 -- common/autotest_common.sh@930 -- # kill -0 95806 00:21:59.758 02:23:59 -- common/autotest_common.sh@931 -- # uname 00:21:59.758 02:23:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:59.758 02:23:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95806 00:21:59.758 killing process with pid 95806 00:21:59.758 02:23:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:59.758 02:23:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:21:59.758 02:23:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95806' 00:21:59.758 02:23:59 -- common/autotest_common.sh@945 -- # kill 95806 00:21:59.758 02:23:59 -- common/autotest_common.sh@950 -- # wait 95806 00:21:59.758 02:23:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:59.758 02:23:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:59.758 02:23:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:59.758 02:23:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.758 02:23:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:59.758 02:23:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.758 02:23:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.758 02:23:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.015 02:23:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:00.015 ************************************ 00:22:00.015 END TEST nvmf_discovery_remove_ifc 00:22:00.015 ************************************ 00:22:00.015 00:22:00.015 real 0m13.504s 00:22:00.015 user 0m22.945s 00:22:00.015 sys 0m1.516s 00:22:00.015 02:23:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.015 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:22:00.015 02:23:59 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:00.015 02:23:59 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:00.015 02:23:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:00.015 02:23:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:00.015 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:22:00.015 ************************************ 00:22:00.015 START TEST nvmf_digest 00:22:00.015 ************************************ 00:22:00.015 02:23:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:00.015 * Looking for test storage... 
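[Editor's note] Between suites, nvmfcleanup and nvmftestfini above unwind everything nvmftestinit built: the kernel initiator modules are unloaded (the rmmod lines), the target process is killed, the target namespace is removed, and the initiator address is flushed. In outline (the _remove_spdk_ns body runs with xtrace disabled, so the netns deletion command is an assumption):

    modprobe -v -r nvme-tcp            # cascades to nvme_fabrics / nvme_keyring, as rmmod'd above
    killprocess "$nvmfpid"             # stop the target (pid 95806 in this run)
    ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if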
00:22:00.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:00.015 02:23:59 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:00.015 02:23:59 -- nvmf/common.sh@7 -- # uname -s 00:22:00.015 02:23:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.016 02:23:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.016 02:23:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.016 02:23:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.016 02:23:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.016 02:23:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.016 02:23:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.016 02:23:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.016 02:23:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.016 02:23:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.016 02:23:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:22:00.016 02:23:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:22:00.016 02:23:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.016 02:23:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.016 02:23:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:00.016 02:23:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:00.016 02:23:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.016 02:23:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.016 02:23:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.016 02:23:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.016 02:23:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.016 02:23:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.016 02:23:59 -- paths/export.sh@5 
-- # export PATH 00:22:00.016 02:23:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.016 02:23:59 -- nvmf/common.sh@46 -- # : 0 00:22:00.016 02:23:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:00.016 02:23:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:00.016 02:23:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:00.016 02:23:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.016 02:23:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.016 02:23:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:00.016 02:23:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:00.016 02:23:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:00.016 02:23:59 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:00.016 02:23:59 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:00.016 02:23:59 -- host/digest.sh@16 -- # runtime=2 00:22:00.016 02:23:59 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:00.016 02:23:59 -- host/digest.sh@132 -- # nvmftestinit 00:22:00.016 02:23:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:00.016 02:23:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.016 02:23:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:00.016 02:23:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:00.016 02:23:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:00.016 02:23:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.016 02:23:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.016 02:23:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.016 02:23:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:00.016 02:23:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:00.016 02:23:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:00.016 02:23:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:00.016 02:23:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:00.016 02:23:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:00.016 02:23:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.016 02:23:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.016 02:23:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:00.016 02:23:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:00.016 02:23:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:00.016 02:23:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:00.016 02:23:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:00.016 02:23:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.016 02:23:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:00.016 02:23:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:00.016 02:23:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:00.016 02:23:59 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:00.016 02:23:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:00.016 02:23:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:00.016 Cannot find device "nvmf_tgt_br" 00:22:00.016 02:23:59 -- nvmf/common.sh@154 -- # true 00:22:00.016 02:23:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:00.016 Cannot find device "nvmf_tgt_br2" 00:22:00.016 02:23:59 -- nvmf/common.sh@155 -- # true 00:22:00.016 02:23:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:00.016 02:23:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:00.016 Cannot find device "nvmf_tgt_br" 00:22:00.016 02:23:59 -- nvmf/common.sh@157 -- # true 00:22:00.016 02:23:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:00.016 Cannot find device "nvmf_tgt_br2" 00:22:00.016 02:23:59 -- nvmf/common.sh@158 -- # true 00:22:00.016 02:23:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:00.274 02:23:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:00.274 02:23:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:00.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:00.274 02:23:59 -- nvmf/common.sh@161 -- # true 00:22:00.274 02:23:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:00.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:00.274 02:23:59 -- nvmf/common.sh@162 -- # true 00:22:00.274 02:23:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:00.274 02:23:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:00.274 02:23:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:00.274 02:23:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:00.274 02:23:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:00.274 02:23:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:00.274 02:23:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:00.274 02:23:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:00.274 02:23:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:00.274 02:23:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:00.274 02:23:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:00.274 02:23:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:00.274 02:23:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:00.274 02:23:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:00.274 02:23:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:00.274 02:23:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:00.274 02:23:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:00.274 02:23:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:00.274 02:23:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:00.274 02:23:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:00.274 02:23:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:00.274 
02:23:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:00.274 02:23:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:00.274 02:23:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:00.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:22:00.274 00:22:00.274 --- 10.0.0.2 ping statistics --- 00:22:00.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.274 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:22:00.274 02:23:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:00.274 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:00.274 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:22:00.274 00:22:00.274 --- 10.0.0.3 ping statistics --- 00:22:00.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.274 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:00.274 02:23:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:00.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:00.274 00:22:00.274 --- 10.0.0.1 ping statistics --- 00:22:00.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.274 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:00.274 02:23:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.274 02:23:59 -- nvmf/common.sh@421 -- # return 0 00:22:00.274 02:23:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:00.274 02:23:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.274 02:23:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:00.274 02:23:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:00.274 02:23:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.274 02:23:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:00.274 02:23:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:00.274 02:23:59 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:00.274 02:23:59 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:00.274 02:23:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:00.274 02:23:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:00.274 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:22:00.274 ************************************ 00:22:00.274 START TEST nvmf_digest_clean 00:22:00.274 ************************************ 00:22:00.274 02:23:59 -- common/autotest_common.sh@1104 -- # run_digest 00:22:00.274 02:23:59 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:00.274 02:23:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:00.274 02:23:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:00.274 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:22:00.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
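[Editor's note] Unlike the discovery suite, the digest target is started with --wait-for-rpc: the app comes up and serves its RPC socket but defers subsystem initialization until the test has had a chance to adjust options. The minimal handshake is the two steps below; the digest test's host side performs exactly this against /var/tmp/bperf.sock further down:

    nvmf_tgt --wait-for-rpc &      # RPC server is up, framework init is on hold
    rpc.py framework_start_init    # resume initialization once options are set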
00:22:00.532 02:23:59 -- nvmf/common.sh@469 -- # nvmfpid=96257 00:22:00.532 02:23:59 -- nvmf/common.sh@470 -- # waitforlisten 96257 00:22:00.532 02:23:59 -- common/autotest_common.sh@819 -- # '[' -z 96257 ']' 00:22:00.532 02:23:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:00.532 02:23:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.532 02:23:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.532 02:23:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.532 02:23:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.532 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:22:00.532 [2024-07-15 02:23:59.892781] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:00.532 [2024-07-15 02:23:59.892874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.532 [2024-07-15 02:24:00.030004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.790 [2024-07-15 02:24:00.118909] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.790 [2024-07-15 02:24:00.119078] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.790 [2024-07-15 02:24:00.119091] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.790 [2024-07-15 02:24:00.119100] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
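[Editor's note] The tracepoint notices above are standard output when -e 0xFFFF enables every trace group; the live trace buffer can be inspected exactly as the app itself suggests:

    spdk_trace -s nvmf -i 0              # snapshot of tracepoints at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the shm buffer for offline analysis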
00:22:00.790 [2024-07-15 02:24:00.119121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.356 02:24:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.356 02:24:00 -- common/autotest_common.sh@852 -- # return 0 00:22:01.356 02:24:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:01.356 02:24:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:01.356 02:24:00 -- common/autotest_common.sh@10 -- # set +x 00:22:01.614 02:24:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.614 02:24:00 -- host/digest.sh@120 -- # common_target_config 00:22:01.614 02:24:00 -- host/digest.sh@43 -- # rpc_cmd 00:22:01.614 02:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.614 02:24:00 -- common/autotest_common.sh@10 -- # set +x 00:22:01.614 null0 00:22:01.614 [2024-07-15 02:24:01.022736] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.614 [2024-07-15 02:24:01.046857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.614 02:24:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.614 02:24:01 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:01.614 02:24:01 -- host/digest.sh@77 -- # local rw bs qd 00:22:01.614 02:24:01 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:01.614 02:24:01 -- host/digest.sh@80 -- # rw=randread 00:22:01.614 02:24:01 -- host/digest.sh@80 -- # bs=4096 00:22:01.614 02:24:01 -- host/digest.sh@80 -- # qd=128 00:22:01.614 02:24:01 -- host/digest.sh@82 -- # bperfpid=96307 00:22:01.614 02:24:01 -- host/digest.sh@83 -- # waitforlisten 96307 /var/tmp/bperf.sock 00:22:01.614 02:24:01 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:01.614 02:24:01 -- common/autotest_common.sh@819 -- # '[' -z 96307 ']' 00:22:01.614 02:24:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:01.614 02:24:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:01.614 02:24:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:01.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:01.614 02:24:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:01.614 02:24:01 -- common/autotest_common.sh@10 -- # set +x 00:22:01.614 [2024-07-15 02:24:01.106492] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:22:01.614 [2024-07-15 02:24:01.106842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96307 ] 00:22:01.872 [2024-07-15 02:24:01.249154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.872 [2024-07-15 02:24:01.337592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.808 02:24:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:02.808 02:24:02 -- common/autotest_common.sh@852 -- # return 0 00:22:02.808 02:24:02 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:02.808 02:24:02 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:02.808 02:24:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:03.066 02:24:02 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:03.066 02:24:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:03.344 nvme0n1 00:22:03.344 02:24:02 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:03.344 02:24:02 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:03.345 Running I/O for 2 seconds... 00:22:05.872 00:22:05.872 Latency(us) 00:22:05.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.872 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:05.872 nvme0n1 : 2.01 21088.62 82.38 0.00 0.00 6063.95 2919.33 13405.09 00:22:05.872 =================================================================================================================== 00:22:05.872 Total : 21088.62 82.38 0.00 0.00 6063.95 2919.33 13405.09 00:22:05.872 0 00:22:05.872 02:24:04 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:05.872 02:24:04 -- host/digest.sh@92 -- # get_accel_stats 00:22:05.872 02:24:04 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:05.872 02:24:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:05.872 02:24:04 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:05.872 | select(.opcode=="crc32c") 00:22:05.872 | "\(.module_name) \(.executed)"' 00:22:05.872 02:24:05 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:05.872 02:24:05 -- host/digest.sh@93 -- # exp_module=software 00:22:05.872 02:24:05 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:05.872 02:24:05 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:05.872 02:24:05 -- host/digest.sh@97 -- # killprocess 96307 00:22:05.872 02:24:05 -- common/autotest_common.sh@926 -- # '[' -z 96307 ']' 00:22:05.872 02:24:05 -- common/autotest_common.sh@930 -- # kill -0 96307 00:22:05.872 02:24:05 -- common/autotest_common.sh@931 -- # uname 00:22:05.872 02:24:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:05.872 02:24:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96307 00:22:05.872 02:24:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:05.872 killing process with pid 96307 00:22:05.872 02:24:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
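As a quick sanity check, the IOPS and MiB/s columns of the results table above agree: MiB/s is IOPS x I/O size / 2^20, so 21088.62 IOPS at 4096 bytes gives

    $ echo 'scale=4; 21088.62 * 4096 / 1048576' | bc
    82.3774

which rounds to the 82.38 MiB/s reported. The same identity reproduces the MiB/s column of every results table in this run.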
00:22:05.872 02:24:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96307' 00:22:05.872 Received shutdown signal, test time was about 2.000000 seconds 00:22:05.872 00:22:05.872 Latency(us) 00:22:05.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.872 =================================================================================================================== 00:22:05.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.872 02:24:05 -- common/autotest_common.sh@945 -- # kill 96307 00:22:05.873 02:24:05 -- common/autotest_common.sh@950 -- # wait 96307 00:22:05.873 02:24:05 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:05.873 02:24:05 -- host/digest.sh@77 -- # local rw bs qd 00:22:05.873 02:24:05 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:05.873 02:24:05 -- host/digest.sh@80 -- # rw=randread 00:22:05.873 02:24:05 -- host/digest.sh@80 -- # bs=131072 00:22:05.873 02:24:05 -- host/digest.sh@80 -- # qd=16 00:22:05.873 02:24:05 -- host/digest.sh@82 -- # bperfpid=96399 00:22:05.873 02:24:05 -- host/digest.sh@83 -- # waitforlisten 96399 /var/tmp/bperf.sock 00:22:05.873 02:24:05 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:05.873 02:24:05 -- common/autotest_common.sh@819 -- # '[' -z 96399 ']' 00:22:05.873 02:24:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:05.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:05.873 02:24:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:05.873 02:24:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:05.873 02:24:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:05.873 02:24:05 -- common/autotest_common.sh@10 -- # set +x 00:22:06.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:06.133 Zero copy mechanism will not be used. 00:22:06.133 [2024-07-15 02:24:05.466374] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
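Each pass attaches the remote namespace to bdevperf with the NVMe/TCP data digest enabled; --ddgst is the flag that makes every data PDU carry a CRC32C, which is what exercises the accel framework this test measures. The command is the one shown in the log, reformatted with the usual meanings of its flags (--hdgst, the header-digest counterpart, is not used here):

    # --ddgst      enable NVMe/TCP data digest (CRC32C over data PDUs)
    # -t/-a/-s/-f  transport, target address, service (port), address family
    # -n           subsystem NQN on the target; -b bdev name prefix (namespace 1 appears as nvme0n1)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0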
00:22:06.133 [2024-07-15 02:24:05.466498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96399 ] 00:22:06.133 [2024-07-15 02:24:05.606764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.133 [2024-07-15 02:24:05.687077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.082 02:24:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:07.082 02:24:06 -- common/autotest_common.sh@852 -- # return 0 00:22:07.082 02:24:06 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:07.082 02:24:06 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:07.082 02:24:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:07.340 02:24:06 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:07.340 02:24:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:07.598 nvme0n1 00:22:07.598 02:24:06 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:07.598 02:24:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:07.598 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:07.598 Zero copy mechanism will not be used. 00:22:07.598 Running I/O for 2 seconds... 00:22:10.127 00:22:10.127 Latency(us) 00:22:10.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.127 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:10.127 nvme0n1 : 2.00 9412.16 1176.52 0.00 0.00 1696.94 737.28 12034.79 00:22:10.127 =================================================================================================================== 00:22:10.127 Total : 9412.16 1176.52 0.00 0.00 1696.94 737.28 12034.79 00:22:10.127 0 00:22:10.127 02:24:09 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:10.127 02:24:09 -- host/digest.sh@92 -- # get_accel_stats 00:22:10.127 02:24:09 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:10.127 02:24:09 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:10.127 | select(.opcode=="crc32c") 00:22:10.127 | "\(.module_name) \(.executed)"' 00:22:10.127 02:24:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:10.127 02:24:09 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:10.127 02:24:09 -- host/digest.sh@93 -- # exp_module=software 00:22:10.127 02:24:09 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:10.127 02:24:09 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:10.127 02:24:09 -- host/digest.sh@97 -- # killprocess 96399 00:22:10.127 02:24:09 -- common/autotest_common.sh@926 -- # '[' -z 96399 ']' 00:22:10.127 02:24:09 -- common/autotest_common.sh@930 -- # kill -0 96399 00:22:10.127 02:24:09 -- common/autotest_common.sh@931 -- # uname 00:22:10.127 02:24:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:10.127 02:24:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96399 00:22:10.127 killing process with pid 96399 00:22:10.127 Received shutdown signal, 
test time was about 2.000000 seconds 00:22:10.127 00:22:10.127 Latency(us) 00:22:10.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.127 =================================================================================================================== 00:22:10.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.127 02:24:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:10.127 02:24:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:10.127 02:24:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96399' 00:22:10.127 02:24:09 -- common/autotest_common.sh@945 -- # kill 96399 00:22:10.127 02:24:09 -- common/autotest_common.sh@950 -- # wait 96399 00:22:10.127 02:24:09 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:10.127 02:24:09 -- host/digest.sh@77 -- # local rw bs qd 00:22:10.127 02:24:09 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:10.127 02:24:09 -- host/digest.sh@80 -- # rw=randwrite 00:22:10.127 02:24:09 -- host/digest.sh@80 -- # bs=4096 00:22:10.127 02:24:09 -- host/digest.sh@80 -- # qd=128 00:22:10.127 02:24:09 -- host/digest.sh@82 -- # bperfpid=96488 00:22:10.127 02:24:09 -- host/digest.sh@83 -- # waitforlisten 96488 /var/tmp/bperf.sock 00:22:10.127 02:24:09 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:10.127 02:24:09 -- common/autotest_common.sh@819 -- # '[' -z 96488 ']' 00:22:10.127 02:24:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:10.127 02:24:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:10.127 02:24:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:10.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:10.127 02:24:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:10.127 02:24:09 -- common/autotest_common.sh@10 -- # set +x 00:22:10.127 [2024-07-15 02:24:09.625115] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
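After each timed run the script verifies which accel module actually computed the digests: it fetches accel_get_stats over the bperf socket and keeps only the crc32c opcode, using the jq filter visible in the xtrace above. A condensed sketch of that verification; the expected module is "software" here because no hardware accel module is loaded in this job:

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    (( acc_executed > 0 ))        || echo 'no crc32c operations were executed'
    [[ $acc_module == software ]] || echo "unexpected accel module: $acc_module"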
00:22:10.127 [2024-07-15 02:24:09.625426] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96488 ] 00:22:10.387 [2024-07-15 02:24:09.762851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.387 [2024-07-15 02:24:09.849183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.319 02:24:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:11.319 02:24:10 -- common/autotest_common.sh@852 -- # return 0 00:22:11.319 02:24:10 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:11.319 02:24:10 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:11.319 02:24:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:11.319 02:24:10 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:11.319 02:24:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:11.575 nvme0n1 00:22:11.833 02:24:11 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:11.833 02:24:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:11.833 Running I/O for 2 seconds... 00:22:13.729 00:22:13.729 Latency(us) 00:22:13.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.729 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:13.729 nvme0n1 : 2.00 24722.77 96.57 0.00 0.00 5172.41 2889.54 11141.12 00:22:13.729 =================================================================================================================== 00:22:13.729 Total : 24722.77 96.57 0.00 0.00 5172.41 2889.54 11141.12 00:22:13.729 0 00:22:13.729 02:24:13 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:13.729 02:24:13 -- host/digest.sh@92 -- # get_accel_stats 00:22:13.729 02:24:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:13.729 02:24:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:13.729 02:24:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:13.729 | select(.opcode=="crc32c") 00:22:13.729 | "\(.module_name) \(.executed)"' 00:22:13.986 02:24:13 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:13.986 02:24:13 -- host/digest.sh@93 -- # exp_module=software 00:22:13.986 02:24:13 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:13.986 02:24:13 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:13.986 02:24:13 -- host/digest.sh@97 -- # killprocess 96488 00:22:13.986 02:24:13 -- common/autotest_common.sh@926 -- # '[' -z 96488 ']' 00:22:13.987 02:24:13 -- common/autotest_common.sh@930 -- # kill -0 96488 00:22:13.987 02:24:13 -- common/autotest_common.sh@931 -- # uname 00:22:13.987 02:24:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.987 02:24:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96488 00:22:14.245 killing process with pid 96488 00:22:14.245 Received shutdown signal, test time was about 2.000000 seconds 00:22:14.245 00:22:14.245 Latency(us) 00:22:14.245 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:22:14.245 =================================================================================================================== 00:22:14.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:14.245 02:24:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:14.245 02:24:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:14.245 02:24:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96488' 00:22:14.245 02:24:13 -- common/autotest_common.sh@945 -- # kill 96488 00:22:14.245 02:24:13 -- common/autotest_common.sh@950 -- # wait 96488 00:22:14.245 02:24:13 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:14.245 02:24:13 -- host/digest.sh@77 -- # local rw bs qd 00:22:14.245 02:24:13 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:14.245 02:24:13 -- host/digest.sh@80 -- # rw=randwrite 00:22:14.245 02:24:13 -- host/digest.sh@80 -- # bs=131072 00:22:14.245 02:24:13 -- host/digest.sh@80 -- # qd=16 00:22:14.245 02:24:13 -- host/digest.sh@82 -- # bperfpid=96574 00:22:14.245 02:24:13 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:14.245 02:24:13 -- host/digest.sh@83 -- # waitforlisten 96574 /var/tmp/bperf.sock 00:22:14.245 02:24:13 -- common/autotest_common.sh@819 -- # '[' -z 96574 ']' 00:22:14.245 02:24:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:14.245 02:24:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:14.245 02:24:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:14.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:14.245 02:24:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:14.245 02:24:13 -- common/autotest_common.sh@10 -- # set +x 00:22:14.245 [2024-07-15 02:24:13.799649] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:14.245 [2024-07-15 02:24:13.799991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96574 ] 00:22:14.245 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:14.245 Zero copy mechanism will not be used. 
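Between passes each bdevperf instance is torn down by killprocess, whose xtrace appears repeatedly above: it checks the process is alive, confirms via ps that the comm name is an SPDK reactor thread rather than a sudo wrapper, then kills and reaps it. A condensed sketch of that pattern (the real helper in autotest_common.sh handles more platforms and the sudo case differently):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1           # still running?
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")      # reactor_0 / reactor_1 in this log
            [[ $name == sudo ]] && return 1              # don't signal a sudo wrapper directly
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }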
00:22:14.508 [2024-07-15 02:24:13.937872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.508 [2024-07-15 02:24:14.015910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.440 02:24:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:15.440 02:24:14 -- common/autotest_common.sh@852 -- # return 0 00:22:15.440 02:24:14 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:15.440 02:24:14 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:15.440 02:24:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:15.697 02:24:15 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:15.697 02:24:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:15.955 nvme0n1 00:22:15.955 02:24:15 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:15.955 02:24:15 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:15.955 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:15.955 Zero copy mechanism will not be used. 00:22:15.955 Running I/O for 2 seconds... 00:22:18.485 00:22:18.485 Latency(us) 00:22:18.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.485 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:18.485 nvme0n1 : 2.00 7821.68 977.71 0.00 0.00 2041.03 1720.32 4379.00 00:22:18.485 =================================================================================================================== 00:22:18.485 Total : 7821.68 977.71 0.00 0.00 2041.03 1720.32 4379.00 00:22:18.485 0 00:22:18.485 02:24:17 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:18.485 02:24:17 -- host/digest.sh@92 -- # get_accel_stats 00:22:18.485 02:24:17 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:18.485 02:24:17 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:18.485 | select(.opcode=="crc32c") 00:22:18.485 | "\(.module_name) \(.executed)"' 00:22:18.485 02:24:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:18.485 02:24:17 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:18.485 02:24:17 -- host/digest.sh@93 -- # exp_module=software 00:22:18.485 02:24:17 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:18.485 02:24:17 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:18.485 02:24:17 -- host/digest.sh@97 -- # killprocess 96574 00:22:18.485 02:24:17 -- common/autotest_common.sh@926 -- # '[' -z 96574 ']' 00:22:18.485 02:24:17 -- common/autotest_common.sh@930 -- # kill -0 96574 00:22:18.485 02:24:17 -- common/autotest_common.sh@931 -- # uname 00:22:18.485 02:24:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:18.485 02:24:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96574 00:22:18.485 02:24:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:18.485 killing process with pid 96574 00:22:18.485 Received shutdown signal, test time was about 2.000000 seconds 00:22:18.485 00:22:18.485 Latency(us) 00:22:18.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.485 
=================================================================================================================== 00:22:18.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.485 02:24:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:18.485 02:24:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96574' 00:22:18.485 02:24:17 -- common/autotest_common.sh@945 -- # kill 96574 00:22:18.485 02:24:17 -- common/autotest_common.sh@950 -- # wait 96574 00:22:18.485 02:24:17 -- host/digest.sh@126 -- # killprocess 96257 00:22:18.485 02:24:17 -- common/autotest_common.sh@926 -- # '[' -z 96257 ']' 00:22:18.485 02:24:17 -- common/autotest_common.sh@930 -- # kill -0 96257 00:22:18.485 02:24:17 -- common/autotest_common.sh@931 -- # uname 00:22:18.485 02:24:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:18.485 02:24:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96257 00:22:18.485 killing process with pid 96257 00:22:18.485 02:24:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:18.485 02:24:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:18.485 02:24:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96257' 00:22:18.485 02:24:17 -- common/autotest_common.sh@945 -- # kill 96257 00:22:18.485 02:24:17 -- common/autotest_common.sh@950 -- # wait 96257 00:22:18.743 ************************************ 00:22:18.743 END TEST nvmf_digest_clean 00:22:18.743 ************************************ 00:22:18.743 00:22:18.743 real 0m18.363s 00:22:18.743 user 0m34.760s 00:22:18.743 sys 0m4.645s 00:22:18.743 02:24:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:18.743 02:24:18 -- common/autotest_common.sh@10 -- # set +x 00:22:18.743 02:24:18 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:18.743 02:24:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:18.743 02:24:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:18.743 02:24:18 -- common/autotest_common.sh@10 -- # set +x 00:22:18.743 ************************************ 00:22:18.743 START TEST nvmf_digest_error 00:22:18.743 ************************************ 00:22:18.743 02:24:18 -- common/autotest_common.sh@1104 -- # run_digest_error 00:22:18.743 02:24:18 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:18.743 02:24:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:18.743 02:24:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:18.743 02:24:18 -- common/autotest_common.sh@10 -- # set +x 00:22:18.743 02:24:18 -- nvmf/common.sh@469 -- # nvmfpid=96687 00:22:18.743 02:24:18 -- nvmf/common.sh@470 -- # waitforlisten 96687 00:22:18.743 02:24:18 -- common/autotest_common.sh@819 -- # '[' -z 96687 ']' 00:22:18.743 02:24:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:18.743 02:24:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.743 02:24:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:18.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.743 02:24:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:18.743 02:24:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:18.743 02:24:18 -- common/autotest_common.sh@10 -- # set +x 00:22:19.000 [2024-07-15 02:24:18.305736] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:19.000 [2024-07-15 02:24:18.306836] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.000 [2024-07-15 02:24:18.447474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.000 [2024-07-15 02:24:18.538069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:19.000 [2024-07-15 02:24:18.538285] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.000 [2024-07-15 02:24:18.538298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.000 [2024-07-15 02:24:18.538307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.000 [2024-07-15 02:24:18.538336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.934 02:24:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:19.934 02:24:19 -- common/autotest_common.sh@852 -- # return 0 00:22:19.934 02:24:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:19.934 02:24:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:19.934 02:24:19 -- common/autotest_common.sh@10 -- # set +x 00:22:19.934 02:24:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.934 02:24:19 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:19.934 02:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.934 02:24:19 -- common/autotest_common.sh@10 -- # set +x 00:22:19.934 [2024-07-15 02:24:19.330906] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:19.934 02:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.934 02:24:19 -- host/digest.sh@104 -- # common_target_config 00:22:19.934 02:24:19 -- host/digest.sh@43 -- # rpc_cmd 00:22:19.934 02:24:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.934 02:24:19 -- common/autotest_common.sh@10 -- # set +x 00:22:19.934 null0 00:22:19.934 [2024-07-15 02:24:19.440353] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.934 [2024-07-15 02:24:19.464488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
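The error-path test starting here differs from the clean pass in one step already visible above: crc32c was assigned to the "error" accel module on the target (accel_assign_opc -o crc32c -m error), and the xtrace below toggles injection around controller attach so that connection setup completes cleanly but test I/O sees corrupted digests. A sketch of that sequence; the split across the two sockets reflects that rpc_cmd talks to the target while bperf_rpc talks to bdevperf, and the meaning of -i 256 is taken verbatim from the log rather than from documentation:

    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"   # initiator (bdevperf)
    TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"      # target, where crc32c -> error module

    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors, retry forever
    $TGT_RPC accel_error_inject_error -o crc32c -t disable                    # attach must succeed cleanly
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256             # corrupt crc32c results (-i 256 as in the log)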
00:22:19.934 02:24:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.934 02:24:19 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:19.934 02:24:19 -- host/digest.sh@54 -- # local rw bs qd 00:22:19.934 02:24:19 -- host/digest.sh@56 -- # rw=randread 00:22:19.934 02:24:19 -- host/digest.sh@56 -- # bs=4096 00:22:19.934 02:24:19 -- host/digest.sh@56 -- # qd=128 00:22:19.934 02:24:19 -- host/digest.sh@58 -- # bperfpid=96731 00:22:19.934 02:24:19 -- host/digest.sh@60 -- # waitforlisten 96731 /var/tmp/bperf.sock 00:22:19.934 02:24:19 -- common/autotest_common.sh@819 -- # '[' -z 96731 ']' 00:22:19.934 02:24:19 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:19.934 02:24:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:19.934 02:24:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.934 02:24:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:19.934 02:24:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.934 02:24:19 -- common/autotest_common.sh@10 -- # set +x 00:22:20.192 [2024-07-15 02:24:19.524640] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:20.192 [2024-07-15 02:24:19.524764] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96731 ] 00:22:20.192 [2024-07-15 02:24:19.663584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.449 [2024-07-15 02:24:19.753399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.015 02:24:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:21.015 02:24:20 -- common/autotest_common.sh@852 -- # return 0 00:22:21.015 02:24:20 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:21.015 02:24:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:21.273 02:24:20 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:21.274 02:24:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.274 02:24:20 -- common/autotest_common.sh@10 -- # set +x 00:22:21.274 02:24:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.274 02:24:20 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.274 02:24:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.532 nvme0n1 00:22:21.532 02:24:21 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:21.532 02:24:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.532 02:24:21 -- common/autotest_common.sh@10 -- # set +x 00:22:21.532 02:24:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.532 02:24:21 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:21.532 02:24:21 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:22:21.790 Running I/O for 2 seconds... 00:22:21.790 [2024-07-15 02:24:21.188297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.790 [2024-07-15 02:24:21.188371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.790 [2024-07-15 02:24:21.188386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.198516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.198573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.198586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.213317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.213375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.213388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.225432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.225490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.225503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.239817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.239873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.239886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.252765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.252820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.252833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.266164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.266221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.266234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.277460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.277514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.277527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.287420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.287476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.287489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.301348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.301403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.301417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.315031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.315068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.315081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.328502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.328559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.328572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:21.791 [2024-07-15 02:24:21.341472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:21.791 [2024-07-15 02:24:21.341527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.791 [2024-07-15 02:24:21.341540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.049 [2024-07-15 02:24:21.354239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.049 [2024-07-15 02:24:21.354297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.049 
[2024-07-15 02:24:21.354311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.049 [2024-07-15 02:24:21.367734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.049 [2024-07-15 02:24:21.367797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.049 [2024-07-15 02:24:21.367811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.049 [2024-07-15 02:24:21.379714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.049 [2024-07-15 02:24:21.379772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.049 [2024-07-15 02:24:21.379786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.049 [2024-07-15 02:24:21.392412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.392471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.392486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.405699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.405756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.405772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.417281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.417341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.417356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.429765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.429860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.429877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.443136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.443194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5072 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.443208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.458633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.458676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.458691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.472002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.472046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.472060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.483273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.483345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.483360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.496676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.496728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.496744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.508957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.509008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.509023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.521620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.521663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.521677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.538042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.538091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:81 nsid:1 lba:17195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.538106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.548251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.548307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.548322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.563017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.563081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.563095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.577980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.578025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.578039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.590550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.590633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.590650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.050 [2024-07-15 02:24:21.601930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.050 [2024-07-15 02:24:21.601976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.050 [2024-07-15 02:24:21.601991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.309 [2024-07-15 02:24:21.614106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.309 [2024-07-15 02:24:21.614181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.309 [2024-07-15 02:24:21.614195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.309 [2024-07-15 02:24:21.626455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.309 [2024-07-15 02:24:21.626515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.309 [2024-07-15 02:24:21.626530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.309 [2024-07-15 02:24:21.640139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.309 [2024-07-15 02:24:21.640200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.309 [2024-07-15 02:24:21.640214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.309 [2024-07-15 02:24:21.651856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.309 [2024-07-15 02:24:21.651912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.309 [2024-07-15 02:24:21.651926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.309 [2024-07-15 02:24:21.663355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.309 [2024-07-15 02:24:21.663411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.309 [2024-07-15 02:24:21.663425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.309 [2024-07-15 02:24:21.679317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.309 [2024-07-15 02:24:21.679375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.309 [2024-07-15 02:24:21.679390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.309 [2024-07-15 02:24:21.689174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.309 [2024-07-15 02:24:21.689231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.309 [2024-07-15 02:24:21.689246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.309 [2024-07-15 02:24:21.704039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.309 [2024-07-15 02:24:21.704083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.309 [2024-07-15 02:24:21.704097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.309 [2024-07-15 02:24:21.716110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760) 00:22:22.309 
00:22:22.309 [2024-07-15 02:24:21.716169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.309 [2024-07-15 02:24:21.716183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:22.309 [2024-07-15 02:24:21.733956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760)
00:22:22.309 [2024-07-15 02:24:21.734000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.309 [2024-07-15 02:24:21.734014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (a data digest error on tqpair 0xd1d760, the failing READ, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats with varying cid and lba values from 02:24:21.745 through 02:24:23.149; the run records 153 such completions in total (see the error-count check below), so the repeated entries are elided here ...]
00:22:23.873 [2024-07-15 02:24:23.163506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1d760)
00:22:23.873 [2024-07-15 02:24:23.163563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.873 [2024-07-15 02:24:23.163576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:23.873
00:22:23.873                                                  Latency(us)
00:22:23.873 Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:22:23.873 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:23.873 nvme0n1            :       2.01  19538.40  76.32    0.00  0.00  6545.71  2740.60  20614.05
00:22:23.873 ===================================================================================================================
00:22:23.873 Total              :              19538.40  76.32    0.00  0.00  6545.71  2740.60  20614.05
00:22:23.873 0
00:22:23.873 02:24:23 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:23.873 02:24:23 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:23.873 | .driver_specific
00:22:23.873 | .nvme_error
00:22:23.873 | .status_code
00:22:23.873 | .command_transient_transport_error'
00:22:23.873 02:24:23 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:23.873 02:24:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:24.135 02:24:23 -- host/digest.sh@71 -- # (( 153 > 0 ))
00:22:24.135 02:24:23 -- host/digest.sh@73 -- # killprocess 96731
00:22:24.135 02:24:23 -- common/autotest_common.sh@926 -- # '[' -z 96731 ']'
00:22:24.135 02:24:23 -- common/autotest_common.sh@930 -- # kill -0 96731
00:22:24.135 02:24:23 -- common/autotest_common.sh@931 -- # uname
00:22:24.135 02:24:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:24.135 02:24:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96731
00:22:24.135 killing process with pid 96731
00:22:24.135 Received shutdown signal, test time was about 2.000000 seconds
00:22:24.135
00:22:24.135                                                  Latency(us)
00:22:24.135 Device Information : runtime(s)     IOPS    MiB/s  Fail/s  TO/s  Average      min       max
00:22:24.135 ===================================================================================================================
00:22:24.135 Total              :       0.00     0.00    0.00    0.00  0.00     0.00     0.00
00:22:24.135 02:24:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:24.135 02:24:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:24.135 02:24:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96731'
00:22:24.135 02:24:23 -- common/autotest_common.sh@945 -- # kill 96731
00:22:24.135 02:24:23 -- common/autotest_common.sh@950 -- # wait 96731
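The (( 153 > 0 )) check above is the actual pass condition for this run: with --nvme-error-stat enabled, the host driver counted 153 TRANSIENT TRANSPORT ERROR completions, and any non-zero count proves the injected digest corruption was detected rather than silently accepted. As a minimal sketch (reconstructed from the @27/@28 xtrace lines above, not the verbatim source of host/digest.sh), the traced get_transient_errcount helper amounts to:

    get_transient_errcount() {
        local bdev=$1
        # bperf_rpc wraps scripts/rpc.py -s /var/tmp/bperf.sock, per the @18 trace line
        bperf_rpc bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

The jq path also documents where the counter lives in the bdev_get_iostat JSON: bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error.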
00:22:24.135 02:24:23 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:22:24.135 02:24:23 -- host/digest.sh@54 -- # local rw bs qd
00:22:24.135 02:24:23 -- host/digest.sh@56 -- # rw=randread
00:22:24.135 02:24:23 -- host/digest.sh@56 -- # bs=131072
00:22:24.135 02:24:23 -- host/digest.sh@56 -- # qd=16
00:22:24.135 02:24:23 -- host/digest.sh@58 -- # bperfpid=96822
00:22:24.135 02:24:23 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:22:24.135 02:24:23 -- host/digest.sh@60 -- # waitforlisten 96822 /var/tmp/bperf.sock
00:22:24.135 02:24:23 -- common/autotest_common.sh@819 -- # '[' -z 96822 ']'
00:22:24.135 02:24:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:24.135 02:24:23 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:24.135 02:24:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:24.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:24.135 02:24:23 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:24.392 02:24:23 -- common/autotest_common.sh@10 -- # set +x
00:22:24.392 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:24.392 Zero copy mechanism will not be used.
00:22:24.392 [2024-07-15 02:24:23.740946] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:22:24.392 [2024-07-15 02:24:23.741060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96822 ]
00:22:24.392 [2024-07-15 02:24:23.879754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:24.652 [2024-07-15 02:24:23.966786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:25.219 02:24:24 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:25.219 02:24:24 -- common/autotest_common.sh@852 -- # return 0
00:22:25.219 02:24:24 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:25.219 02:24:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:25.477 02:24:24 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:25.477 02:24:24 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:25.477 02:24:24 -- common/autotest_common.sh@10 -- # set +x
00:22:25.477 02:24:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:25.477 02:24:24 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:25.477 02:24:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:25.735 nvme0n1
00:22:25.994 02:24:25 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:25.994 02:24:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:25.994 02:24:25 -- common/autotest_common.sh@10 -- # set +x
00:22:25.994 02:24:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:25.994 02:24:25 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:25.994 02:24:25 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:25.994 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:25.994 Zero copy mechanism will not be used.
00:22:25.994 Running I/O for 2 seconds...
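Read together, the @108 and @54-@60 trace lines above give the shape of the run_bperf_err wrapper. A minimal sketch, with the backgrounding and $! capture assumed (the xtrace cannot show them) and $rootdir standing in for /home/vagrant/spdk_repo/spdk:

    run_bperf_err() {
        local rw bs qd
        rw=$1 bs=$2 qd=$3      # this invocation: run_bperf_err randread 131072 16
        # -z starts bdevperf idle; I/O begins only when perform_tests arrives on the RPC socket
        "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
            -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
        bperfpid=$!
        waitforlisten "$bperfpid" /var/tmp/bperf.sock
        # ...then the set_options / inject-disable / attach / inject-corrupt / perform_tests
        # sequence traced above
    }

Note the driver options set before errors are injected: --nvme-error-stat keeps per-status-code error counters on the host, --bdev-retry-count -1 retries failed I/O indefinitely (so the corrupted reads are retried rather than failing the job), and --ddgst on the attach enables the NVMe/TCP data digest that the corrupted crc32c results then violate.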
00:22:25.994 [2024-07-15 02:24:25.408593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860)
00:22:25.994 [2024-07-15 02:24:25.408677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:25.994 [2024-07-15 02:24:25.408692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:25.994 [2024-07-15 02:24:25.412526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860)
00:22:25.994 [2024-07-15 02:24:25.412581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:25.994 [2024-07-15 02:24:25.412595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same pattern continues on tqpair 0x2215860 for this 131072-byte, qd=16 run (len:32 READs, with varying cid/lba and sqhd cycling 0021/0041/0061/0001) from 02:24:25.416 onward; the capture is cut off mid-entry at 02:24:25.457118 ...]
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.460990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.461045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.461058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.464395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.464451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.464464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.468270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.468311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.468325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.472300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.472355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.472368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.475412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.475468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.475481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.479628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.479675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.479688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.483788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.483828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.483841] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.487230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.487286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.487299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.491034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.491089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.491102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.494936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.494992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.495005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.498564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.498614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.498628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.501739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.501790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.501828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.505629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.505679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.505692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.509650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.509700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:25.995 [2024-07-15 02:24:25.509714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.995 [2024-07-15 02:24:25.512673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.995 [2024-07-15 02:24:25.512707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.995 [2024-07-15 02:24:25.512719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.517078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.517132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.517145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.520789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.520827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.520840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.524699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.524738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.524750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.528630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.528663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.528675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.532800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.532838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.532850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.536442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.536493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.536506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.540004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.540056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.540069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.543453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.543506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.543518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.546536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.546591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.546638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:25.996 [2024-07-15 02:24:25.549927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:25.996 [2024-07-15 02:24:25.549967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.996 [2024-07-15 02:24:25.549980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.553184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.553235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.553249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.557136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.557192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.557206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.561588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.561673] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.561689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.565158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.565212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.565225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.568608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.568673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.568687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.572624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.572687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.572700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.576780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.576835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.576847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.580700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.580753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.580765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.584292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.584344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.584357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.588036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.588089] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.588102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.591883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.591923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.591937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.595112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.595166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.595178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.599120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.599176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.599189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.603051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.603107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.603119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.607152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.607190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.607219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.611019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.611073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.611086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.614393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 
00:22:26.256 [2024-07-15 02:24:25.614464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.614477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.618556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.256 [2024-07-15 02:24:25.618620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.256 [2024-07-15 02:24:25.618634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.256 [2024-07-15 02:24:25.622174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.622229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.622242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.626161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.626216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.626229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.629739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.629776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.629789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.633455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.633506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.633519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.637000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.637069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.637097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.640666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.640718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.640731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.644565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.644627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.644640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.648100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.648155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.648167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.651809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.651863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.651876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.655561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.655612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.655626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.659642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.659678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.659690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.662839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.662893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.662905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.666562] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.666623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.666637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.670205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.670260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.670272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.673431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.673483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.673496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.677413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.677465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.677477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.680899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.680938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.680950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.684874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.684912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.684925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.688016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.688069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.688082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
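[Editor's note] Each injected corruption shows up above as a three-record group: nvme_tcp flags the data digest error on the qpair, nvme_qpair prints the affected READ, and the completion is reported with status (00/22), which SPDK decodes as COMMAND TRANSIENT TRANSPORT ERROR; with --bdev-retry-count -1 set earlier, such I/O is presumably retried rather than failed upward. A throwaway sanity check over a captured copy of this output (bperf.log is an assumed filename, not produced by the job itself):

  # The two counts should track each other: one digest error per
  # transient-transport completion for every injected crc32c corruption.
  grep -c 'data digest error' bperf.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log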
00:22:26.257 [2024-07-15 02:24:25.691564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.691627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.691641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.694997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.695051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.695064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.698762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.698816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.698829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.702155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.702208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.702221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.705443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.705493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.705506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.708783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.257 [2024-07-15 02:24:25.708834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.257 [2024-07-15 02:24:25.708846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.257 [2024-07-15 02:24:25.712432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.712485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.712498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.715939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.715977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.715989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.719294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.719347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.719360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.723218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.723271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.723284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.726987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.727055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.727068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.730290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.730343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.730355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.734156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.734197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.734211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.737677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.737711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.737724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.741473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.741527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.741541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.745777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.745846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.745860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.749388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.749424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.749437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.753325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.753362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.753376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.757622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.757687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.757701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.760882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.760935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.760949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.764248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.764300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.764313] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.768169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.768237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.768250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.771895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.771934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.771948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.775529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.775583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.775607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.779305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.779361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.779374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.783366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.783420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.783433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.787411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.787465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.787478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.790681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.790718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.790731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.794592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.794641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.794655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.799126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.799181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.799194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.803191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.803245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.803258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.258 [2024-07-15 02:24:25.807415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.258 [2024-07-15 02:24:25.807471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.258 [2024-07-15 02:24:25.807484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.259 [2024-07-15 02:24:25.811522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.259 [2024-07-15 02:24:25.811576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.259 [2024-07-15 02:24:25.811603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.518 [2024-07-15 02:24:25.815365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.518 [2024-07-15 02:24:25.815419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.518 [2024-07-15 02:24:25.815432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.518 [2024-07-15 02:24:25.819250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:26.518 [2024-07-15 02:24:25.819302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.518 [2024-07-15 02:24:25.819315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:26.518 [2024-07-15 02:24:25.822908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860)
00:22:26.518 [2024-07-15 02:24:25.822961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.518 [2024-07-15 02:24:25.822974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-message sequence (nvme_tcp.c:1391 "data digest error on tqpair=(0x2215860)", nvme_qpair.c: 243 READ command print, nvme_qpair.c: 474 "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion) repeats for every subsequent qid:1 READ from 02:24:25.826694 through 02:24:26.331566; only the cid, lba, and sqhd values differ ...]
00:22:26.782 [2024-07-15 02:24:26.335044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860)
00:22:26.782 [2024-07-15 02:24:26.335080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:26.782 [2024-07-15 02:24:26.335092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:27.042 [2024-07-15 02:24:26.338337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042
[2024-07-15 02:24:26.338373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.338384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.342104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.342156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.342183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.346503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.346540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.346553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.349358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.349392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.349404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.352903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.352938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.352950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.356993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.357030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.357042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.360569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.360618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.360632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.364229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.364267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.364281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.368246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.368285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.368297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.372428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.372467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.372481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.375946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.375983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.375996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.379981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.380034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.380047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.384130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.384167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.384180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.387863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.387902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.387915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.390878] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.390917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.390931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.395248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.395287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.395299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.399359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.399397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.399410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.402867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.402906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.402919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.406104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.406144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.406173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.409776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.409843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.409857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.413158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.413194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.413206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:27.042 [2024-07-15 02:24:26.416623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.416660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.416672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.420682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.420720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.420748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.423967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.424005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.424017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.427461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.427497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.427508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.431163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.431199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.431211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.434626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.434669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-07-15 02:24:26.434682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.042 [2024-07-15 02:24:26.438094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.042 [2024-07-15 02:24:26.438131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.438159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.441454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.441504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.441516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.444772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.444825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.444837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.448945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.449014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.449027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.452769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.452809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.452822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.456488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.456527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.456539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.460361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.460399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.460411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.464676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.464712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.464723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.468553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.468617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.468648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.472129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.472169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.472182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.475210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.475249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.475262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.479174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.479230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.479243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.482424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.482464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.482477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.485780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.485826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.485840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.489821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.489858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:27.043 [2024-07-15 02:24:26.489872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.493335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.493388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.493401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.497490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.497544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.497557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.501303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.501355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.501368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.504648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.504699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.504711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.507595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.507661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.507674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.511475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.511529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.511541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.515435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.515489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.515502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.518637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.518687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.518701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.522373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.522426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.522455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.526200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.526254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.526266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.529699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.529751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.529763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.533954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.534010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.534023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.537468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.537519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.537532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.541277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.541329] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.541342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.043 [2024-07-15 02:24:26.544840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.043 [2024-07-15 02:24:26.544893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-07-15 02:24:26.544906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.548234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.548288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.548300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.551992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.552047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.552060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.556232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.556287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.556300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.559892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.559946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.559974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.563860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.563913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.563926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.567041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.567095] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.567107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.570802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.570853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.570866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.574299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.574337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.574349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.577670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.577704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.577716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.580923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.580975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.581002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.583868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.583922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.583934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.587963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.588033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.588046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.591869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.591923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.591935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.044 [2024-07-15 02:24:26.595587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.044 [2024-07-15 02:24:26.595649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-07-15 02:24:26.595663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.599390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.599444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.599456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.602870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.602923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.602935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.606612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.606662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.606675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.609874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.609912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.609925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.612808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.612858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.612871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.616786] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.616842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.616856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.620732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.620766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.620778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.624623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.624682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.624695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.628096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.628127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.628140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.632063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.632114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.632127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.635718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.635755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.635769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.639850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.639889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.639902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.642902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.642939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.642953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.646943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.647011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.647023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.650915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.650971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.651015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.654690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.654742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.654755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.658254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.658308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.658321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.662000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.662056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.662069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.665413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.665464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.665477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.668683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.668736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.668749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.671923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.671976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.671989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.675166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.675219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.675231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.678141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.678195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.678207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.681425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.681495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.681508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.685645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.685697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.685710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.689019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.689071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.689084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.692233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.692288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.692301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.695862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.695917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.695930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.698852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.698906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.698919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.702123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.305 [2024-07-15 02:24:26.702177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.305 [2024-07-15 02:24:26.702189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.305 [2024-07-15 02:24:26.706227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.306 [2024-07-15 02:24:26.706281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.306 [2024-07-15 02:24:26.706293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.306 [2024-07-15 02:24:26.709518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.306 [2024-07-15 02:24:26.709567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.306 [2024-07-15 02:24:26.709579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.306 [2024-07-15 02:24:26.713232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.306 [2024-07-15 02:24:26.713283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.306 
[2024-07-15 02:24:26.713295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:27.306 [2024-07-15 02:24:26.716660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860)
00:22:27.306 [2024-07-15 02:24:26.716712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.306 [2024-07-15 02:24:26.716724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:27.306 [2024-07-15 02:24:26.720340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860)
00:22:27.306 [2024-07-15 02:24:26.720392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.306 [2024-07-15 02:24:26.720404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:27.306 [2024-07-15 02:24:26.724176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860)
00:22:27.306 [2024-07-15 02:24:26.724229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.306 [2024-07-15 02:24:26.724241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1391 *ERROR*: data digest error on tqpair=(0x2215860), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining READ commands on qid:1 from 02:24:26.727 through 02:24:27.193 ...]
00:22:27.830 [2024-07-15 02:24:27.196853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860)
00:22:27.830 [2024-07-15 02:24:27.196905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.830 [2024-07-15 02:24:27.196917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:27.830 [2024-07-15 02:24:27.200419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860)
00:22:27.830 [2024-07-15 02:24:27.200472] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.830 [2024-07-15 02:24:27.200485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.830 [2024-07-15 02:24:27.204350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.830 [2024-07-15 02:24:27.204387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.830 [2024-07-15 02:24:27.204400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.830 [2024-07-15 02:24:27.207331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.207385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.207398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.211242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.211293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.211306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.214662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.214710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.214724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.218554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.218594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.218619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.221789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.221850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.221864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.225403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 
00:22:27.831 [2024-07-15 02:24:27.225455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.225468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.228988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.229043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.229055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.232736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.232788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.232800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.236351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.236404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.236417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.239907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.239960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.239972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.243605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.243666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.243678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.247383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.247438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.247450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.251430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.251483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.251495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.254606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.254656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.254670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.257442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.257492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.257504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.260829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.260881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.260893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.264753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.264805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.264817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.268207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.268261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.268273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.271520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.271573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.271586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.275625] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.275689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.275701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.279312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.279366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.279378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.282613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.282659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.282672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.286475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.286515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.286528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.290193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.290247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.290259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.293445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.293497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.293509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.297251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.297306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.297319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:22:27.831 [2024-07-15 02:24:27.301009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.301063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.301075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.304550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.304615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.304629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.308345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.831 [2024-07-15 02:24:27.308398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.831 [2024-07-15 02:24:27.308410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.831 [2024-07-15 02:24:27.312078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.312134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.312147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.315877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.315931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.315944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.319425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.319479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.319492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.323573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.323622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.323635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.327207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.327262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.327275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.330043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.330084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.330097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.334089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.334159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.334172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.337395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.337447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.337459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.340776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.340829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.340841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.344545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.344608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.344622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.347902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.347955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.347968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.351240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.351293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.351305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.354969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.355022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.355035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.358468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.358523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.358536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.361925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.361980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.361992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.364750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.364801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.364830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.368544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.368582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.368594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.372185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.372239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.372251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.376052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.376104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.376116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.379639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.379703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.379716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:27.832 [2024-07-15 02:24:27.383423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:27.832 [2024-07-15 02:24:27.383476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.832 [2024-07-15 02:24:27.383488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:28.091 [2024-07-15 02:24:27.387107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:28.091 [2024-07-15 02:24:27.387160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.091 [2024-07-15 02:24:27.387173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:28.091 [2024-07-15 02:24:27.390131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:28.091 [2024-07-15 02:24:27.390200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.091 [2024-07-15 02:24:27.390212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:28.091 [2024-07-15 02:24:27.393934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:28.091 [2024-07-15 02:24:27.393973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.091 [2024-07-15 02:24:27.393986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:28.091 [2024-07-15 02:24:27.397148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:28.091 [2024-07-15 02:24:27.397200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.091 
[2024-07-15 02:24:27.397212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:28.091 [2024-07-15 02:24:27.400949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2215860) 00:22:28.091 [2024-07-15 02:24:27.401000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.091 [2024-07-15 02:24:27.401013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:28.091 00:22:28.091 Latency(us) 00:22:28.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.091 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:28.091 nvme0n1 : 2.00 8463.60 1057.95 0.00 0.00 1887.15 621.85 5510.98 00:22:28.091 =================================================================================================================== 00:22:28.091 Total : 8463.60 1057.95 0.00 0.00 1887.15 621.85 5510.98 00:22:28.091 0 00:22:28.091 02:24:27 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:28.091 02:24:27 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:28.091 | .driver_specific 00:22:28.091 | .nvme_error 00:22:28.091 | .status_code 00:22:28.091 | .command_transient_transport_error' 00:22:28.091 02:24:27 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:28.091 02:24:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:28.349 02:24:27 -- host/digest.sh@71 -- # (( 546 > 0 )) 00:22:28.349 02:24:27 -- host/digest.sh@73 -- # killprocess 96822 00:22:28.349 02:24:27 -- common/autotest_common.sh@926 -- # '[' -z 96822 ']' 00:22:28.349 02:24:27 -- common/autotest_common.sh@930 -- # kill -0 96822 00:22:28.349 02:24:27 -- common/autotest_common.sh@931 -- # uname 00:22:28.349 02:24:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:28.349 02:24:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96822 00:22:28.349 killing process with pid 96822 00:22:28.349 Received shutdown signal, test time was about 2.000000 seconds 00:22:28.349 00:22:28.349 Latency(us) 00:22:28.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.349 =================================================================================================================== 00:22:28.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.349 02:24:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:28.350 02:24:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:28.350 02:24:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96822' 00:22:28.350 02:24:27 -- common/autotest_common.sh@945 -- # kill 96822 00:22:28.350 02:24:27 -- common/autotest_common.sh@950 -- # wait 96822 00:22:28.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
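The (( 546 > 0 )) check above is this pass's assertion: at least one injected digest error must have been recorded as a transient transport error. As a reading aid only, here is a minimal sketch of what the traced get_transient_errcount pipeline does, reconstructed from the xtrace lines above (the helper bodies are an assumption; the rpc.py path, socket, and jq filter are copied from the log):

  # Query bdevperf's per-bdev iostat over the bperf RPC socket and pull out
  # the transient transport error counter kept because of --nvme-error-stat.
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  get_transient_errcount() {
      bperf_rpc bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0]
              | .driver_specific
              | .nvme_error
              | .status_code
              | .command_transient_transport_error'
  }

  # Fails the pass if no transient transport errors were counted; the trace
  # above shows 546 for nvme0n1.
  (( $(get_transient_errcount nvme0n1) > 0 ))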
00:22:28.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:28.607 02:24:27 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:22:28.607 02:24:27 -- host/digest.sh@54 -- # local rw bs qd
00:22:28.607 02:24:27 -- host/digest.sh@56 -- # rw=randwrite
00:22:28.607 02:24:27 -- host/digest.sh@56 -- # bs=4096
00:22:28.607 02:24:27 -- host/digest.sh@56 -- # qd=128
00:22:28.607 02:24:27 -- host/digest.sh@58 -- # bperfpid=96913
00:22:28.607 02:24:27 -- host/digest.sh@60 -- # waitforlisten 96913 /var/tmp/bperf.sock
00:22:28.607 02:24:27 -- common/autotest_common.sh@819 -- # '[' -z 96913 ']'
00:22:28.607 02:24:27 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:28.607 02:24:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:28.607 02:24:27 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:28.607 02:24:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:28.607 02:24:27 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:28.607 02:24:27 -- common/autotest_common.sh@10 -- # set +x
00:22:28.607 [2024-07-15 02:24:27.979406] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:22:28.607 [2024-07-15 02:24:27.979514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96913 ]
00:22:28.607 [2024-07-15 02:24:28.117512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:28.865 [2024-07-15 02:24:28.189888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:29.431 02:24:28 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:29.431 02:24:28 -- common/autotest_common.sh@852 -- # return 0
00:22:29.431 02:24:28 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:29.431 02:24:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:29.689 02:24:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:29.689 02:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:29.689 02:24:29 -- common/autotest_common.sh@10 -- # set +x
00:22:29.689 02:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:29.689 02:24:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:29.689 02:24:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:29.946 nvme0n1
00:22:29.946 02:24:29 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:29.946 02:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:29.946 02:24:29 -- common/autotest_common.sh@10 -- # set +x
00:22:29.946 02:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:29.946 02:24:29 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:29.946 02:24:29 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
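Before the randwrite I/O starts below, the traced setup amounts to the following sketch (arguments copied verbatim from the trace; the bperf_rpc wrapper matches the host/digest.sh@18 lines, while the socket behind rpc_cmd is not shown in this trace, so the last call assumes rpc.py's default socket):

  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # Keep per-controller NVMe error statistics (needed by get_transient_errcount)
  # and retry failed I/O indefinitely so injected errors do not abort the run.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the controller over TCP with data digest (--ddgst) enabled.
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm crc32c error injection in the accel layer (-t corrupt -i 256, as traced),
  # so data digest checks fail and each WRITE below completes with a
  # transient transport error.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256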
00:22:30.204 Running I/O for 2 seconds...
00:22:30.204 [2024-07-15 02:24:29.624179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190eea00
00:22:30.204 [2024-07-15 02:24:29.625348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.204 [2024-07-15 02:24:29.625402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
[... roughly forty further Data digest error / COMMAND TRANSIENT TRANSPORT ERROR triplets elided (02:24:29.635568 through 02:24:30.073654); only the pdu, lba, cid, and sqhd values vary ...]
00:22:30.724 [2024-07-15 02:24:30.084957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ee190
00:22:30.724 [2024-07-15 02:24:30.085742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:30.724 [2024-07-15 02:24:30.085776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.095689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f0788 00:22:30.724 [2024-07-15 02:24:30.096937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.096987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.107240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f8e88 00:22:30.724 [2024-07-15 02:24:30.107726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.107757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.118688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e5ec8 00:22:30.724 [2024-07-15 02:24:30.119132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.119165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.129242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e1f80 00:22:30.724 [2024-07-15 02:24:30.130571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.130618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.143240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e27f0 00:22:30.724 [2024-07-15 02:24:30.144498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.144545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.151336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fda78 00:22:30.724 [2024-07-15 02:24:30.151703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.151735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.164700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e3060 00:22:30.724 [2024-07-15 02:24:30.165737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10268 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.165785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.172682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fa3a0 00:22:30.724 [2024-07-15 02:24:30.172768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.172788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.185109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e6738 00:22:30.724 [2024-07-15 02:24:30.185696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.185728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.198417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ee5c8 00:22:30.724 [2024-07-15 02:24:30.199708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.199756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.206513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f8618 00:22:30.724 [2024-07-15 02:24:30.206856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.206888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.219747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f2948 00:22:30.724 [2024-07-15 02:24:30.221548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.221607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.229557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e6fa8 00:22:30.724 [2024-07-15 02:24:30.230956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.231004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.240400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f4f40 00:22:30.724 [2024-07-15 02:24:30.240776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:15616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.240807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.251238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fdeb0 00:22:30.724 [2024-07-15 02:24:30.251600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.724 [2024-07-15 02:24:30.251643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:30.724 [2024-07-15 02:24:30.261922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f6890 00:22:30.724 [2024-07-15 02:24:30.262552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.725 [2024-07-15 02:24:30.262586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:30.725 [2024-07-15 02:24:30.272785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fe720 00:22:30.725 [2024-07-15 02:24:30.273129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.725 [2024-07-15 02:24:30.273162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:30.984 [2024-07-15 02:24:30.284922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190eb328 00:22:30.984 [2024-07-15 02:24:30.286405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.984 [2024-07-15 02:24:30.286471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:30.984 [2024-07-15 02:24:30.293964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e4140 00:22:30.984 [2024-07-15 02:24:30.294838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.984 [2024-07-15 02:24:30.294885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:30.984 [2024-07-15 02:24:30.305395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190eaab8 00:22:30.984 [2024-07-15 02:24:30.305951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.984 [2024-07-15 02:24:30.305986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:30.984 [2024-07-15 02:24:30.316366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fd208 00:22:30.984 [2024-07-15 02:24:30.316867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:122 nsid:1 lba:4930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.984 [2024-07-15 02:24:30.316899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:30.984 [2024-07-15 02:24:30.327209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ef6a8 00:22:30.984 [2024-07-15 02:24:30.327718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.984 [2024-07-15 02:24:30.327752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:30.984 [2024-07-15 02:24:30.338357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e23b8 00:22:30.984 [2024-07-15 02:24:30.339097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.984 [2024-07-15 02:24:30.339146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:30.984 [2024-07-15 02:24:30.349231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e0ea0 00:22:30.984 [2024-07-15 02:24:30.349685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.984 [2024-07-15 02:24:30.349716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:30.984 [2024-07-15 02:24:30.358909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f3a28 00:22:30.984 [2024-07-15 02:24:30.359053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.984 [2024-07-15 02:24:30.359074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:30.984 [2024-07-15 02:24:30.372129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f5378 00:22:30.984 [2024-07-15 02:24:30.373787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.984 [2024-07-15 02:24:30.373862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.385243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f3a28 00:22:30.985 [2024-07-15 02:24:30.386591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.386634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.393344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f8e88 00:22:30.985 [2024-07-15 02:24:30.393719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.393749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.406642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fa3a0 00:22:30.985 [2024-07-15 02:24:30.407735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.407782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.414532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fc128 00:22:30.985 [2024-07-15 02:24:30.414638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.414658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.427398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ed4e8 00:22:30.985 [2024-07-15 02:24:30.428217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.428265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.437386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e99d8 00:22:30.985 [2024-07-15 02:24:30.438606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.438666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.447980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ddc00 00:22:30.985 [2024-07-15 02:24:30.448454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.448482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.458827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e2c28 00:22:30.985 [2024-07-15 02:24:30.459406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.459440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.469728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f0bc0 00:22:30.985 [2024-07-15 
02:24:30.470902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.470951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.480089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f31b8 00:22:30.985 [2024-07-15 02:24:30.481829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.481879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.490424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f92c0 00:22:30.985 [2024-07-15 02:24:30.491471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.491519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.500899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190eb760 00:22:30.985 [2024-07-15 02:24:30.502814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.502866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.512084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ff3c8 00:22:30.985 [2024-07-15 02:24:30.513654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.513702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.523874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fa3a0 00:22:30.985 [2024-07-15 02:24:30.525029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.525077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:30.985 [2024-07-15 02:24:30.531841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ebfd0 00:22:30.985 [2024-07-15 02:24:30.532023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.985 [2024-07-15 02:24:30.532043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:31.246 [2024-07-15 02:24:30.544031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ee5c8 
00:22:31.246 [2024-07-15 02:24:30.544770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.246 [2024-07-15 02:24:30.544819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:31.246 [2024-07-15 02:24:30.554017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e12d8 00:22:31.246 [2024-07-15 02:24:30.555245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.246 [2024-07-15 02:24:30.555293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:31.246 [2024-07-15 02:24:30.564579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ef6a8 00:22:31.246 [2024-07-15 02:24:30.565913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.246 [2024-07-15 02:24:30.565966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.246 [2024-07-15 02:24:30.577261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f5be8 00:22:31.246 [2024-07-15 02:24:30.578218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.246 [2024-07-15 02:24:30.578268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:31.246 [2024-07-15 02:24:30.587272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e0630 00:22:31.246 [2024-07-15 02:24:30.588619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.246 [2024-07-15 02:24:30.588681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:31.246 [2024-07-15 02:24:30.597985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ec840 00:22:31.246 [2024-07-15 02:24:30.598615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.246 [2024-07-15 02:24:30.598699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:31.246 [2024-07-15 02:24:30.610757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fc128 00:22:31.246 [2024-07-15 02:24:30.612071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.246 [2024-07-15 02:24:30.612118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:31.246 [2024-07-15 02:24:30.618098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20a6b50) with pdu=0x2000190e9e10 00:22:31.247 [2024-07-15 02:24:30.619336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.619384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.630617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f96f8 00:22:31.247 [2024-07-15 02:24:30.631494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.631521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.641205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ef270 00:22:31.247 [2024-07-15 02:24:30.642564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.642609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.652573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e95a0 00:22:31.247 [2024-07-15 02:24:30.653167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.653197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.666379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e5a90 00:22:31.247 [2024-07-15 02:24:30.667644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.667699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.674479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f8a50 00:22:31.247 [2024-07-15 02:24:30.674771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.674791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.687602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fda78 00:22:31.247 [2024-07-15 02:24:30.688412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.688459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.697415] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f3a28 00:22:31.247 [2024-07-15 02:24:30.698381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.698434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.711132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ef270 00:22:31.247 [2024-07-15 02:24:30.712293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.712339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.721141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f6cc8 00:22:31.247 [2024-07-15 02:24:30.722489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.722541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.731808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e6738 00:22:31.247 [2024-07-15 02:24:30.733272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.733321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.744846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f3e60 00:22:31.247 [2024-07-15 02:24:30.746089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.746155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.754360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e0ea0 00:22:31.247 [2024-07-15 02:24:30.755719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.755768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.765125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190df988 00:22:31.247 [2024-07-15 02:24:30.765696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.765726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.775987] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fbcf0 00:22:31.247 [2024-07-15 02:24:30.776549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.776582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.786628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e84c0 00:22:31.247 [2024-07-15 02:24:30.787190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.787222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:31.247 [2024-07-15 02:24:30.797023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e73e0 00:22:31.247 [2024-07-15 02:24:30.797581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.247 [2024-07-15 02:24:30.797623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.807783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f4f40 00:22:31.507 [2024-07-15 02:24:30.808572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.808630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.818276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190dece0 00:22:31.507 [2024-07-15 02:24:30.818819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.818851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.828225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ea248 00:22:31.507 [2024-07-15 02:24:30.829408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.829457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.838897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ed920 00:22:31.507 [2024-07-15 02:24:30.839484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.839518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:31.507 
[2024-07-15 02:24:30.849423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190df118 00:22:31.507 [2024-07-15 02:24:30.850033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.850067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.859880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ebfd0 00:22:31.507 [2024-07-15 02:24:30.860459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.860493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.870373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fc998 00:22:31.507 [2024-07-15 02:24:30.870953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.870984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.880777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e8088 00:22:31.507 [2024-07-15 02:24:30.881338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.881369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.893666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fc560 00:22:31.507 [2024-07-15 02:24:30.895056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.895105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.904088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fc128 00:22:31.507 [2024-07-15 02:24:30.905472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.905521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.914483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fc998 00:22:31.507 [2024-07-15 02:24:30.915901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.915950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.924882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e8088 00:22:31.507 [2024-07-15 02:24:30.926301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.926352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.936034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f6cc8 00:22:31.507 [2024-07-15 02:24:30.937395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.937445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.946664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f92c0 00:22:31.507 [2024-07-15 02:24:30.948018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.948066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.957538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f7100 00:22:31.507 [2024-07-15 02:24:30.959156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.959208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.966919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e95a0 00:22:31.507 [2024-07-15 02:24:30.968040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.968088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.977329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f7100 00:22:31.507 [2024-07-15 02:24:30.979122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.979172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.988025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190eee38 00:22:31.507 [2024-07-15 02:24:30.989004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.989053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 
cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:30.997863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f6890 00:22:31.507 [2024-07-15 02:24:30.997986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:30.998007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:31.008194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f5be8 00:22:31.507 [2024-07-15 02:24:31.009328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:31.009377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:31.020772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190feb58 00:22:31.507 [2024-07-15 02:24:31.021565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:31.021636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:31.032043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e4de8 00:22:31.507 [2024-07-15 02:24:31.032912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:31.032963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:31.041594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e3498 00:22:31.507 [2024-07-15 02:24:31.041975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:31.042003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:31.507 [2024-07-15 02:24:31.054997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fd208 00:22:31.507 [2024-07-15 02:24:31.056035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.507 [2024-07-15 02:24:31.056084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:31.766 [2024-07-15 02:24:31.065259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f4298 00:22:31.766 [2024-07-15 02:24:31.066799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.766 [2024-07-15 02:24:31.066866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:31.766 [2024-07-15 02:24:31.076167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f4f40 00:22:31.766 [2024-07-15 02:24:31.076921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.766 [2024-07-15 02:24:31.076970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:31.766 [2024-07-15 02:24:31.087987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f1ca0 00:22:31.766 [2024-07-15 02:24:31.088899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.766 [2024-07-15 02:24:31.088948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.766 [2024-07-15 02:24:31.098416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190fda78 00:22:31.766 [2024-07-15 02:24:31.099333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.766 [2024-07-15 02:24:31.099381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.766 [2024-07-15 02:24:31.107722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e49b0 00:22:31.766 [2024-07-15 02:24:31.108730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.766 [2024-07-15 02:24:31.108778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:31.766 [2024-07-15 02:24:31.117989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f7970 00:22:31.766 [2024-07-15 02:24:31.119539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.766 [2024-07-15 02:24:31.119574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:31.766 [2024-07-15 02:24:31.130701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190f35f0 00:22:31.766 [2024-07-15 02:24:31.131798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.766 [2024-07-15 02:24:31.131846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:31.766 [2024-07-15 02:24:31.138719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190ed0b0 00:22:31.766 [2024-07-15 02:24:31.138814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.766 [2024-07-15 02:24:31.138834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:22:31.766 [2024-07-15 02:24:31.151677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190edd58
00:22:31.766 [2024-07-15 02:24:31.152477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:31.766 [2024-07-15 02:24:31.152525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:22:31.766 [2024-07-15 02:24:31.161859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6b50) with pdu=0x2000190e6fa8
00:22:31.766 [2024-07-15 02:24:31.163168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:31.766 [2024-07-15 02:24:31.163217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
[... 41 further entries of the same three-line shape omitted, 02:24:31.172 through 02:24:31.604: each is a tcp.c:2034 data digest error on tqpair=(0x20a6b50), the WRITE that carried it, and its TRANSIENT TRANSPORT ERROR (00/22) completion, differing only in timestamp, cid, lba, sqhd, and pdu address ...]
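Every failure in this run appears as the three-line pattern above: the TCP transport (tcp.c:2034, data_crc32_calc_done) detects the corrupted CRC32C data digest, the offending WRITE is printed, and the command completes with TRANSIENT TRANSPORT ERROR (00/22). A quick offline cross-check of the count the test reads back below is a plain grep over a saved copy of this console output (the log file name is an assumption; the harness does not write such a file itself):

    # each injected digest failure yields exactly one (00/22) completion line
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' nvmf-tcp-digest.log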
00:22:32.285
00:22:32.285 Latency(us)
00:22:32.285 Device Information                                                   : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:22:32.285 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:22:32.285 nvme0n1                                                              :       2.00   23291.56      90.98      0.00      0.00    5489.53    2115.03   13047.62
00:22:32.285 ===================================================================================================================
00:22:32.285 Total                                                                :            23291.56      90.98      0.00      0.00    5489.53    2115.03   13047.62
00:22:32.285 0
00:22:32.285 02:24:31 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:32.285 02:24:31 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:32.285 02:24:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:32.285 02:24:31 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:32.285 | .driver_specific
00:22:32.285 | .nvme_error
00:22:32.285 | .status_code
00:22:32.285 | .command_transient_transport_error'
00:22:32.543 02:24:31 -- host/digest.sh@71 -- # (( 182 > 0 ))
00:22:32.543 02:24:31 -- host/digest.sh@73 -- # killprocess 96913
00:22:32.543 02:24:31 -- common/autotest_common.sh@926 -- # '[' -z 96913 ']'
00:22:32.543 02:24:31 -- common/autotest_common.sh@930 -- # kill -0 96913
00:22:32.543 02:24:31 -- common/autotest_common.sh@931 -- # uname
00:22:32.543 02:24:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:32.543 02:24:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96913
killing process with pid 96913
Received shutdown signal, test time was about 2.000000 seconds
00:22:32.543
00:22:32.543 Latency(us)
00:22:32.543 Device Information                                                   : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:22:32.543 ===================================================================================================================
00:22:32.543 Total                                                                :       0.00       0.00       0.00      0.00      0.00       0.00       0.00       0.00
00:22:32.543 02:24:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:32.543 02:24:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:32.543 02:24:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96913'
00:22:32.543 02:24:31 -- common/autotest_common.sh@945 -- # kill 96913
00:22:32.543 02:24:31 -- common/autotest_common.sh@950 -- # wait 96913
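The check at host/digest.sh@71 passes because the iostat JSON reports 182 transient transport errors, matching the digest failures logged above. The same counter can be read by hand while a bperf instance is still up; a minimal sketch, using only the rpc.py subcommand and jq path that appear in the trace (the single-line jq filter is equivalent to the wrapped multi-line one above):

    # read the transient transport error counter for nvme0n1 over the bperf RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The counter is populated because bdev_nvme_set_options is given --nvme-error-stat before the controller is attached (visible at host/digest.sh@61 in the next run's setup).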
00:22:32.801 02:24:32 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:22:32.801 02:24:32 -- host/digest.sh@54 -- # local rw bs qd
00:22:32.801 02:24:32 -- host/digest.sh@56 -- # rw=randwrite
00:22:32.801 02:24:32 -- host/digest.sh@56 -- # bs=131072
00:22:32.801 02:24:32 -- host/digest.sh@56 -- # qd=16
00:22:32.801 02:24:32 -- host/digest.sh@58 -- # bperfpid=96998
00:22:32.801 02:24:32 -- host/digest.sh@60 -- # waitforlisten 96998 /var/tmp/bperf.sock
00:22:32.801 02:24:32 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:22:32.801 02:24:32 -- common/autotest_common.sh@819 -- # '[' -z 96998 ']'
00:22:32.802 02:24:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:32.802 02:24:32 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:32.802 02:24:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:32.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:32.802 02:24:32 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:32.802 02:24:32 -- common/autotest_common.sh@10 -- # set +x
00:22:32.802 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:22:32.802 [2024-07-15 02:24:32.196514] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:22:32.802 [2024-07-15 02:24:32.196638] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96998 ]
00:22:32.802 [2024-07-15 02:24:32.335279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:32.802 [2024-07-15 02:24:32.422003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:33.624 02:24:33 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:33.624 02:24:33 -- common/autotest_common.sh@852 -- # return 0
00:22:33.624 02:24:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:33.624 02:24:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:33.882 02:24:33 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:33.882 02:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:33.882 02:24:33 -- common/autotest_common.sh@10 -- # set +x
00:22:33.882 02:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:33.882 02:24:33 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:33.882 02:24:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:34.448 nvme0n1
00:22:34.448 02:24:33 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:34.448 02:24:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:34.448 02:24:33 -- common/autotest_common.sh@10 -- # set +x
00:22:34.448 02:24:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:34.448 02:24:33 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:34.448 02:24:33 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:34.448 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
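Stripped of the xtrace noise, the setup for this second run (128 KiB randwrite at queue depth 16) is a short RPC sequence. The sketch below condenses it and is not a standalone reproduction: it assumes bdevperf is already listening on /var/tmp/bperf.sock, that the nvmf target is serving nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and that the target's RPC socket path is /var/tmp/spdk.sock, which is a guess, since the trace routes accel_error_inject_error through the harness's rpc_cmd helper rather than an explicit socket.

    # condensed from the trace above; two RPC endpoints are involved:
    #   BPERF -> bdevperf's own RPC socket, TGT -> the SPDK target (socket path assumed)
    BPERF="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    TGT="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors; retry I/O indefinitely
    $TGT accel_error_inject_error -o crc32c -t disable                     # clean CRC32C digests while attaching
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $TGT accel_error_inject_error -o crc32c -t corrupt -i 32               # flags copied verbatim from the trace
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag on the attach is what arms the check: with data digests enabled on the TCP connection, every corrupted CRC32C surfaces as a digest error rather than silent data corruption.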
00:22:34.448 Running I/O for 2 seconds...
00:22:34.448 [2024-07-15 02:24:33.864057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90
00:22:34.448 [2024-07-15 02:24:33.864352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:34.448 [2024-07-15 02:24:33.864384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:34.448 [2024-07-15 02:24:33.868751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90
00:22:34.448 [2024-07-15 02:24:33.868900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:34.448 [2024-07-15 02:24:33.868925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... roughly 90 further entries of the same three-line shape omitted, 02:24:33.873 onward and continuing past this excerpt: every WRITE in this run (qid:1 cid:15, len:32) hits an injected data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 and completes with TRANSIENT TRANSPORT ERROR (00/22), with sqhd cycling 0001/0021/0041/0061 and only timestamp and lba varying ...]
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.710 [2024-07-15 02:24:34.251898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.710 [2024-07-15 02:24:34.252013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.710 [2024-07-15 02:24:34.252033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.710 [2024-07-15 02:24:34.256058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.710 [2024-07-15 02:24:34.256167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.710 [2024-07-15 02:24:34.256187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.710 [2024-07-15 02:24:34.260263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.710 [2024-07-15 02:24:34.260366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.710 [2024-07-15 02:24:34.260386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.264515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.264661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.264682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.268698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.268843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.268863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.272999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.273219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.273239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.277168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.277392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.277413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.281654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.281825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.281847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.285862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.285969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.285991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.290039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.290133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.290184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.294341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.294466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.294487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.298638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.298788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.298809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.302947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.303086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.303107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.307340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.307558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 
[2024-07-15 02:24:34.307579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.311689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.311928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.311949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.315980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.316116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.316137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.320234] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.320352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.320373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.324490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.324597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.324630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.328676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.328792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.328813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.332888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.333043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.333064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.337056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.337196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.337216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.341323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.341545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.341565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.345500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.345757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.345779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.349650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.349795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.349844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.353847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.353944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.353965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.358063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.358199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.358220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.362346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.362464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.362485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.366722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.366887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.366909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.371099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.371238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.371259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.375478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.375714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.375736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.379712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.379957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.379978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.384028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.384184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.384205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.388321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.388435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.388456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.392535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.392658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.392679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.396701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.396805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.396826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.401010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.401154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.401186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.405223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.405361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.405382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.409447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.409696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.409717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.413645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.413912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.413933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.417947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.418097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.418119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.422263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.422378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.422399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.426541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 
[2024-07-15 02:24:34.426665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.426686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.430873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.430981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.431002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.435241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.435380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.435401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.439598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.439781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.439803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.443914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.444150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.444171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.448182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.448393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.448414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.452553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.452712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.452734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.456766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.456894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.456915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.968 [2024-07-15 02:24:34.460994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.968 [2024-07-15 02:24:34.461112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.968 [2024-07-15 02:24:34.461132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.465232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.465341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.465362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.469587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.469737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.469759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.473870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.473992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.474014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.478253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.478485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.478508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.482710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.482910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.482937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.487181] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.487317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.487338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.491383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.491498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.491519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.495699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.495815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.495836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.499916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.500022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.500043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.504168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.504300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.504320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.508440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.508579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.508600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.512832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.513071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.513097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
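Each rejected WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22): status code type 0x0 (generic command status) and status code 0x22, with dnr:0, so the command remains retryable. The printed fields (p / sct / sc / m / dnr / cid) come straight out of Dword 3 of the NVMe completion queue entry; the sketch below decodes them assuming the Dword 3 layout in the NVMe base specification, with a dw3 value fabricated to match the completions above rather than captured from this run.

```c
/* Sketch: decode the status fields printed in the log from Dword 3
 * of an NVMe CQE. Assumed layout per the NVMe base spec:
 * CID [15:0], P [16], SC [24:17], SCT [27:25], M [30], DNR [31]. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical DW3 for cid:15, p:0, SCT 0x0, SC 0x22 (transient
     * transport error), m:0, dnr:0 -- matching the completions above. */
    uint32_t dw3 = (0x22u << 17) | 15u;

    unsigned cid = dw3 & 0xFFFFu;
    unsigned p   = (dw3 >> 16) & 0x1u;
    unsigned sc  = (dw3 >> 17) & 0xFFu;
    unsigned sct = (dw3 >> 25) & 0x7u;
    unsigned m   = (dw3 >> 30) & 0x1u;
    unsigned dnr = (dw3 >> 31) & 0x1u;

    /* Prints "(00/22) cid:15 p:0 m:0 dnr:0", the same shape as the
     * spdk_nvme_print_completion records in this log. */
    printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n", sct, sc, cid, p, m, dnr);

    /* dnr == 0 means the Do Not Retry bit is clear: the host may
     * resubmit the command. */
    return 0;
}
```

With DNR clear the host is free to resubmit, which is consistent with the run issuing WRITE after WRITE and harvesting the same transient status each time.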
00:22:34.969 [2024-07-15 02:24:34.516990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.517236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.517257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.969 [2024-07-15 02:24:34.521243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:34.969 [2024-07-15 02:24:34.521379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.969 [2024-07-15 02:24:34.521400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.525485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.525600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.525621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.529798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.529942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.529963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.534121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.534267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.534288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.538418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.538564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.538585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.542786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.542946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.542968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.547011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.547240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.547271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.551209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.551421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.551442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.555524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.555694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.555715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.559889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.560004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.560024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.564094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.564200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.564220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.568335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.568461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.568481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.572708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.572847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.572868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.577055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.577190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.577211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.581286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.581512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.581534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.585571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.585847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.585868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.589875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.590030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.590051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.594097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.594241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.594261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.598425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.598568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.598590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.602697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.602809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.602830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.606967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.607102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.607122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.611133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.611282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.611302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.615452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.615690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.615711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.619689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.619905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.619925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.228 [2024-07-15 02:24:34.623898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.228 [2024-07-15 02:24:34.624052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.228 [2024-07-15 02:24:34.624074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.628136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.628251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.628272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.632421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.632544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 
[2024-07-15 02:24:34.632565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.636747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.636852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.636873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.640964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.641114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.641134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.645242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.645379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.645399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.649586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.649840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.649863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.653848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.654079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.654100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.658069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.658219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.658239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.662364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.662500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.662521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.666604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.666742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.666763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.670934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.671038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.671058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.675399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.675546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.675567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.679781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.679923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.679945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.684304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.684537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.684559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.688715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.688951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.688973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.693061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.693218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.693240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.697362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.697479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.697499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.701765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.701904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.701927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.706056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.706180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.706201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.710406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.710547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.710569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.714813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.714984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.715005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.719346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.719570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.719591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.723827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.724084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.724106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.728195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.728339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.728360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.732581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.732695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.732717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.736881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.736996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.737017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.741152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.741275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.741297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.745612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.745772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.745793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.750036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.229 [2024-07-15 02:24:34.750187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.229 [2024-07-15 02:24:34.750208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.229 [2024-07-15 02:24:34.754432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.230 
[2024-07-15 02:24:34.754659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-07-15 02:24:34.754681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.230 [2024-07-15 02:24:34.758790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.230 [2024-07-15 02:24:34.759023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-07-15 02:24:34.759044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.230 [2024-07-15 02:24:34.763115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.230 [2024-07-15 02:24:34.763272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-07-15 02:24:34.763292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.230 [2024-07-15 02:24:34.767596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.230 [2024-07-15 02:24:34.767742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-07-15 02:24:34.767764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.230 [2024-07-15 02:24:34.772008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.230 [2024-07-15 02:24:34.772102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-07-15 02:24:34.772123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.230 [2024-07-15 02:24:34.776311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.230 [2024-07-15 02:24:34.776405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-07-15 02:24:34.776427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.230 [2024-07-15 02:24:34.780745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.230 [2024-07-15 02:24:34.780872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.230 [2024-07-15 02:24:34.780894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.489 [2024-07-15 02:24:34.785163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.489 [2024-07-15 02:24:34.785304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-07-15 02:24:34.785325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.489 [2024-07-15 02:24:34.789722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.489 [2024-07-15 02:24:34.789951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-07-15 02:24:34.789973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.489 [2024-07-15 02:24:34.794030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.489 [2024-07-15 02:24:34.794266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-07-15 02:24:34.794288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.489 [2024-07-15 02:24:34.798475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.489 [2024-07-15 02:24:34.798675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-07-15 02:24:34.798697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.489 [2024-07-15 02:24:34.802864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.489 [2024-07-15 02:24:34.802972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-07-15 02:24:34.803009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.489 [2024-07-15 02:24:34.807307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.489 [2024-07-15 02:24:34.807415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-07-15 02:24:34.807437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.489 [2024-07-15 02:24:34.811568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.489 [2024-07-15 02:24:34.811673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-07-15 02:24:34.811696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.489 [2024-07-15 02:24:34.815869] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.489 [2024-07-15 02:24:34.816035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.489 [2024-07-15 02:24:34.816057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.820054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.820217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.820237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.824449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.824708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.824739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.828776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.829002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.829024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.833126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.833317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.833338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.837415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.837540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.837561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.841721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.841863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.841885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:35.490 [2024-07-15 02:24:34.846031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.846123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.846159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.850365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.850533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.850554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.854675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.854809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.854831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.859076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.859303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.859335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.863330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.863590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.863626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.867587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.867804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.867826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.872059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.872180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.872200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.876403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.876526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.876548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.880687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.880789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.880809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.884965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.885129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.885149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.889150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.889291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.889312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.893427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.893659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.893680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.897642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.897868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.897890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.901889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.902081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.902117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.906034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.906159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.906179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.910193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.910300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.910321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.914454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.914580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.914601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.918786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.918956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.918978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.922962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.923116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.923137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.927271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.927487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.927508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.931392] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.931674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.931702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.935766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.935949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.490 [2024-07-15 02:24:34.935970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.490 [2024-07-15 02:24:34.939916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.490 [2024-07-15 02:24:34.940060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.940082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.944126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.944236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.944257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.948289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.948413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.948433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.952499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.952673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.952695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.956706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.956834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.956855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.961019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.961237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 
[2024-07-15 02:24:34.961258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.965259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.965505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.965525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.969494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.969692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.969714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.973722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.973839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.973860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.977973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.978088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.978123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.982277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.982407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.982430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.986606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.986787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.986809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.990938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.991075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.991096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.995322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.995542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.995564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:34.999499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:34.999733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:34.999754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.003674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.003861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.003882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.007913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.008061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.008082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.012122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.012244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.012264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.016398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.016541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.016564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.020719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.020884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.020905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.024964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.025094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.025114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.029344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.029573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.029594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.033646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.033879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.033899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.037924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.038117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.038154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.491 [2024-07-15 02:24:35.042170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.491 [2024-07-15 02:24:35.042273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.491 [2024-07-15 02:24:35.042293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.751 [2024-07-15 02:24:35.046427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.751 [2024-07-15 02:24:35.046574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.751 [2024-07-15 02:24:35.046595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.751 [2024-07-15 02:24:35.050715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.751 [2024-07-15 02:24:35.050817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.751 [2024-07-15 02:24:35.050837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.751 [2024-07-15 02:24:35.054880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.751 [2024-07-15 02:24:35.055043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.751 [2024-07-15 02:24:35.055063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.751 [2024-07-15 02:24:35.059009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.751 [2024-07-15 02:24:35.059190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.751 [2024-07-15 02:24:35.059210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.751 [2024-07-15 02:24:35.063403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.751 [2024-07-15 02:24:35.063635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.751 [2024-07-15 02:24:35.063669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.751 [2024-07-15 02:24:35.067674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.751 [2024-07-15 02:24:35.067900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.751 [2024-07-15 02:24:35.067926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.751 [2024-07-15 02:24:35.071954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.751 [2024-07-15 02:24:35.072142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.751 [2024-07-15 02:24:35.072162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.751 [2024-07-15 02:24:35.076108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.751 [2024-07-15 02:24:35.076232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.076252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.080351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 
[2024-07-15 02:24:35.080454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.080475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.084483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.084587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.084608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.088761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.088934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.088955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.092949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.093088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.093108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.097262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.097479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.097500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.101501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.101763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.101785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.106075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.106282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.106303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.110450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.110547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.110569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.114753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.114865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.114887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.119019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.119133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.119156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.123374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.123527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.123549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.127740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.127903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.127925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.132222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.132453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.132475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.136554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.136776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.136808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.140923] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.141110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.141132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.145183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.145338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.145359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.149376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.149502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.149523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.153545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.153663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.153685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.157827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.157986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.158008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.162052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.162219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.162241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.752 [2024-07-15 02:24:35.166356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.166598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.752 [2024-07-15 02:24:35.166619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
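The repeated records above all exercise the same two mechanics: tcp.c's data_crc32_calc_done fires when the NVMe/TCP data digest (DDGST) — a CRC-32C over a PDU's DATA field — does not match the digest carried in the PDU, and spdk_nvme_print_completion then renders the resulting completion status as "(00/22) ... p:0 m:0 dnr:0", i.e. SCT 0h / SC 22h (Transient Transport Error) with DNR=0, so the host may retry the command. The sketch below is an illustrative, self-contained program, not SPDK's implementation (SPDK uses its own crc32c helpers in tcp.c); the CRC parameters and status-field bit layout are taken from the NVMe and NVMe/TCP specifications as assumed here, and 0xE3069283 is the published CRC-32C check value for the standard "123456789" test vector.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (slow but dependency-free) reflected CRC-32C, the Castagnoli
 * polynomial NVMe/TCP mandates for HDGST/DDGST: init 0xFFFFFFFF,
 * reflected poly 0x82F63B78, final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Decode the 16-bit NVMe completion status field (CQE DW3, bits 31:16),
 * matching the fields the log prints: (SCT/SC) p m dnr. */
static void print_status(uint16_t status)
{
    unsigned p   = status & 1u;            /* phase tag        */
    unsigned sc  = (status >> 1) & 0xffu;  /* status code      */
    unsigned sct = (status >> 9) & 0x7u;   /* status code type */
    unsigned m   = (status >> 14) & 1u;    /* more             */
    unsigned dnr = (status >> 15) & 1u;    /* do not retry     */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    const char check[] = "123456789";      /* standard CRC test vector */

    /* Expect 0xe3069283, the published CRC-32C check value; a receiver
     * comparing this against the PDU's DDGST and seeing a mismatch is
     * what produces the "Data digest error" lines above. */
    printf("ddgst = 0x%08x\n", (unsigned)crc32c(check, strlen(check)));

    /* SCT 0x0 / SC 0x22 is exactly the "(00/22)" in the completions above. */
    print_status((uint16_t)(0x22u << 1));
    return 0;
}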
00:22:35.752 [2024-07-15 02:24:35.170740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.752 [2024-07-15 02:24:35.170932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.170953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.175036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.175224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.175246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.179243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.179380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.179401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.183429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.183532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.183553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.187695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.187799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.187819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.191888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.192052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.192072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.196092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.196231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.196252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.200509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.200744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.200772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.204770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.205025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.205052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.208991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.209177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.209199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.213214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.213342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.213363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.217409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.217521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.217542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.221847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.221954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.221975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.226105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.226268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.226290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.230399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.230561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.230582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.234774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.235020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.235047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.239055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.239261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.239282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.243328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.243533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.243554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.247636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.247741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.247762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.251875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.251997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.252034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.256095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.256222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.256243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.260488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.260683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.753 [2024-07-15 02:24:35.260705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.753 [2024-07-15 02:24:35.264813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.753 [2024-07-15 02:24:35.264969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 [2024-07-15 02:24:35.264992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 02:24:35.269296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.754 [2024-07-15 02:24:35.269516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 [2024-07-15 02:24:35.269537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 02:24:35.273499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.754 [2024-07-15 02:24:35.273724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 [2024-07-15 02:24:35.273745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 02:24:35.277887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.754 [2024-07-15 02:24:35.278062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 [2024-07-15 02:24:35.278085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 02:24:35.282054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.754 [2024-07-15 02:24:35.282175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 [2024-07-15 02:24:35.282197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 02:24:35.286290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.754 [2024-07-15 02:24:35.286410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 
[2024-07-15 02:24:35.286432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 02:24:35.290575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.754 [2024-07-15 02:24:35.290701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 [2024-07-15 02:24:35.290723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 02:24:35.294838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.754 [2024-07-15 02:24:35.294987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 [2024-07-15 02:24:35.295009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 02:24:35.299102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.754 [2024-07-15 02:24:35.299256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 [2024-07-15 02:24:35.299277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:35.754 [2024-07-15 02:24:35.303427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:35.754 [2024-07-15 02:24:35.303676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.754 [2024-07-15 02:24:35.303697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.307810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.308043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.308064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.312124] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.312310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.312331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.316431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.316542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.316562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.320776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.320881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.320902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.324967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.325067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.325089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.329281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.329452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.329472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.333557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.333706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.333728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.337972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.338187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.338210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.342304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.342530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.342551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.346553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.346766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.346787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.350932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.351047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.351068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.355190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.355306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.355327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.359475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.359582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.359604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.363886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.364056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.364078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.368159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.368327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.368349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.372569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.372810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.372832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.376887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.377139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.377159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.381246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.381434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.381456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.385552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.385706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.385728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.389915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.390023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.390044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.394237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.394358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.394379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.398762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.398928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.398949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.403008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.403145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.403166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.407394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 
[2024-07-15 02:24:35.407615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.407636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.411692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.411934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.411960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.416038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.416224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.416245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.420357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.420458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.420479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.424715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.424837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.014 [2024-07-15 02:24:35.424859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.014 [2024-07-15 02:24:35.428946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.014 [2024-07-15 02:24:35.429051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.429073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.433448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.433643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.433678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.437803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.437961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.437983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.442111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.442369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.442390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.446467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.446739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.446761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.450789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.450955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.450976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.455187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.455290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.455311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.459369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.459489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.459510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.463790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.463890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.463911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.468128] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.468289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.468325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.472440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.472574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.472595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.476763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.476976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.476997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.480964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.481218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.481254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.485193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.485367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.485388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.489366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.489503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.489524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.493625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.493747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.493769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:36.015 [2024-07-15 02:24:35.497867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.497973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.497995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.502068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.502259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.502280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.506290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.506426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.506463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.510648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.510894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.510915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.514972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.515199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.515218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.519119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.519301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.519322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.523318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.523447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.523468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.527490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.527614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.527648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.531865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.531989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.532010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.536294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.536459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.536480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.540554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.540726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.540747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.544857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.545100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.545127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.549075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.549283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.549304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.553221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.553402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.553423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.557516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.557616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.557637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.561770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.561891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.015 [2024-07-15 02:24:35.561911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.015 [2024-07-15 02:24:35.565936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.015 [2024-07-15 02:24:35.566036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.016 [2024-07-15 02:24:35.566057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.570241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.570397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.570417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.574511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.574695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.574717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.578921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.579134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.579155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.583210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.583422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.583442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.587403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.587585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.587606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.591700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.591813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.591834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.595922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.596023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.596043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.600016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.600133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.600154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.604193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.604358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.604378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.608380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.608519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.608540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.612713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.612935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 
[2024-07-15 02:24:35.612957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.616850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.617068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.617089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.621081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.621266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.621286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.625263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.625361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.625382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.629394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.629492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.629512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.633540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.633638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.633672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.637903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.638064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.638086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.642058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.642204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.642225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.275 [2024-07-15 02:24:35.646362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.275 [2024-07-15 02:24:35.646578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.275 [2024-07-15 02:24:35.646600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
[... the same three-record sequence -- a tcp.c:2034:data_crc32_calc_done data digest error, the WRITE command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for every remaining digest failure from 02:24:35.650534 through 02:24:35.843861; only the lba and sqhd fields vary, always qid:1 cid:15 on tqpair=(0x20a6cf0) ...] 
00:22:36.534 [2024-07-15 02:24:35.848031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.534 [2024-07-15 02:24:35.848193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.534 [2024-07-15 02:24:35.848214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.534 [2024-07-15 02:24:35.852319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.534 [2024-07-15 02:24:35.852531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.534 [2024-07-15 02:24:35.852551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:36.534 [2024-07-15 02:24:35.856500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20a6cf0) with pdu=0x2000190fef90 00:22:36.534 [2024-07-15 02:24:35.856608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.534 [2024-07-15 02:24:35.856639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:36.534 00:22:36.534 Latency(us) 00:22:36.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.534 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:36.534 nvme0n1 : 2.00 7204.24 900.53 0.00 0.00 2215.66 1772.45 6494.02 00:22:36.534 =================================================================================================================== 00:22:36.534 Total : 7204.24 900.53 0.00 0.00 2215.66 1772.45 6494.02 00:22:36.534 0 00:22:36.534 02:24:35 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:36.534 02:24:35 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:36.534 02:24:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:36.534 02:24:35 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:36.534 | .driver_specific 00:22:36.534 | .nvme_error 00:22:36.534 | .status_code 00:22:36.534 | .command_transient_transport_error' 00:22:36.792 02:24:36 -- host/digest.sh@71 -- # (( 465 > 0 )) 00:22:36.792 02:24:36 -- host/digest.sh@73 -- # killprocess 96998 00:22:36.792 02:24:36 -- common/autotest_common.sh@926 -- # '[' -z 96998 ']' 00:22:36.792 02:24:36 -- common/autotest_common.sh@930 -- # kill -0 96998 00:22:36.792 02:24:36 -- common/autotest_common.sh@931 -- # uname 00:22:36.792 02:24:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:36.792 02:24:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96998 00:22:36.792 killing process with pid 96998 00:22:36.792 Received shutdown signal, test time was about 2.000000 seconds 00:22:36.792 00:22:36.792 Latency(us) 00:22:36.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.792 =================================================================================================================== 00:22:36.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.792 02:24:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:36.792 02:24:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:36.792 02:24:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96998' 00:22:36.792 02:24:36 -- common/autotest_common.sh@945 -- # kill 96998 00:22:36.792 02:24:36 -- common/autotest_common.sh@950 -- # wait 96998 00:22:36.792 02:24:36 -- host/digest.sh@115 -- # killprocess 96687 00:22:36.792 02:24:36 -- 
common/autotest_common.sh@926 -- # '[' -z 96687 ']' 00:22:36.792 02:24:36 -- common/autotest_common.sh@930 -- # kill -0 96687 00:22:36.792 02:24:36 -- common/autotest_common.sh@931 -- # uname 00:22:36.792 02:24:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:37.049 02:24:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96687 00:22:37.049 killing process with pid 96687 00:22:37.049 02:24:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:37.049 02:24:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:37.049 02:24:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96687' 00:22:37.049 02:24:36 -- common/autotest_common.sh@945 -- # kill 96687 00:22:37.049 02:24:36 -- common/autotest_common.sh@950 -- # wait 96687 00:22:37.049 ************************************ 00:22:37.049 END TEST nvmf_digest_error 00:22:37.049 ************************************ 00:22:37.049 00:22:37.049 real 0m18.333s 00:22:37.049 user 0m34.763s 00:22:37.049 sys 0m4.804s 00:22:37.049 02:24:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.049 02:24:36 -- common/autotest_common.sh@10 -- # set +x 00:22:37.306 02:24:36 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:22:37.306 02:24:36 -- host/digest.sh@139 -- # nvmftestfini 00:22:37.306 02:24:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:37.306 02:24:36 -- nvmf/common.sh@116 -- # sync 00:22:37.306 02:24:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:37.306 02:24:36 -- nvmf/common.sh@119 -- # set +e 00:22:37.306 02:24:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:37.306 02:24:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:37.306 rmmod nvme_tcp 00:22:37.306 rmmod nvme_fabrics 00:22:37.306 rmmod nvme_keyring 00:22:37.306 02:24:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:37.306 02:24:36 -- nvmf/common.sh@123 -- # set -e 00:22:37.306 02:24:36 -- nvmf/common.sh@124 -- # return 0 00:22:37.306 02:24:36 -- nvmf/common.sh@477 -- # '[' -n 96687 ']' 00:22:37.306 02:24:36 -- nvmf/common.sh@478 -- # killprocess 96687 00:22:37.306 02:24:36 -- common/autotest_common.sh@926 -- # '[' -z 96687 ']' 00:22:37.306 02:24:36 -- common/autotest_common.sh@930 -- # kill -0 96687 00:22:37.306 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (96687) - No such process 00:22:37.306 Process with pid 96687 is not found 00:22:37.306 02:24:36 -- common/autotest_common.sh@953 -- # echo 'Process with pid 96687 is not found' 00:22:37.306 02:24:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:37.306 02:24:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:37.306 02:24:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:37.306 02:24:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.306 02:24:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:37.306 02:24:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.306 02:24:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.306 02:24:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.306 02:24:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:37.306 ************************************ 00:22:37.306 END TEST nvmf_digest 00:22:37.306 ************************************ 00:22:37.306 00:22:37.306 real 0m37.394s 00:22:37.306 user 1m9.672s 00:22:37.306 sys 0m9.785s 00:22:37.306 02:24:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 
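[note] The (( 465 > 0 )) check above is the crux of the digest test: each data digest error detected on the TCP qpair is expected to be counted as a COMMAND TRANSIENT TRANSPORT ERROR in the bdev's NVMe error statistics. A minimal standalone sketch of the same check, assuming (as in this run) that bperf is still listening on /var/tmp/bperf.sock and the attached bdev is named nvme0n1:

    # Pull the per-bdev NVMe error counters over the bperf RPC socket and
    # extract the transient transport error count; the test requires it > 0.
    errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 )) || echo "FAIL: no transient transport errors recorded" >&2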
00:22:37.306 02:24:36 -- common/autotest_common.sh@10 -- # set +x 00:22:37.306 02:24:36 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:22:37.306 02:24:36 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:22:37.306 02:24:36 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:37.306 02:24:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:37.306 02:24:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:37.306 02:24:36 -- common/autotest_common.sh@10 -- # set +x 00:22:37.306 ************************************ 00:22:37.306 START TEST nvmf_mdns_discovery 00:22:37.306 ************************************ 00:22:37.306 02:24:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:22:37.565 * Looking for test storage... 00:22:37.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:37.565 02:24:36 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:37.565 02:24:36 -- nvmf/common.sh@7 -- # uname -s 00:22:37.565 02:24:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.565 02:24:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.565 02:24:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.565 02:24:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.565 02:24:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.565 02:24:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.565 02:24:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.565 02:24:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.565 02:24:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.565 02:24:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.565 02:24:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:22:37.565 02:24:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:22:37.565 02:24:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.565 02:24:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.565 02:24:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:37.565 02:24:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:37.565 02:24:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.565 02:24:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.565 02:24:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.565 02:24:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.565 02:24:36 -- paths/export.sh@3 -- # 
PATH=[... same Go/golangci/protoc toolchain PATH as printed by paths/export.sh@2 above; duplicate value elided ...] 00:22:37.565 02:24:36 -- paths/export.sh@4 -- # PATH=[... duplicate toolchain PATH elided ...] 00:22:37.565 02:24:36 -- paths/export.sh@5 -- # export PATH 00:22:37.565 02:24:36 -- paths/export.sh@6 -- # echo [... duplicate toolchain PATH elided ...] 00:22:37.565 02:24:36 -- nvmf/common.sh@46 -- # : 0 00:22:37.565 02:24:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.565 02:24:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.565 02:24:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.565 02:24:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.565 02:24:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.565 02:24:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:37.565 02:24:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.565 02:24:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.565 02:24:36 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:22:37.565 02:24:36 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:22:37.565 02:24:36 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:37.565 02:24:36 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:37.565 02:24:36 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:22:37.565 02:24:36 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:37.565 02:24:36 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:22:37.565 02:24:36 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:22:37.565 02:24:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:37.565 02:24:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.565 02:24:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:37.565 02:24:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:37.565 02:24:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:37.565 02:24:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.565 02:24:36 -- common/autotest_common.sh@22 -- # eval
'_remove_spdk_ns 14> /dev/null' 00:22:37.565 02:24:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.565 02:24:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:37.565 02:24:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:37.565 02:24:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:37.565 02:24:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:37.565 02:24:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:37.565 02:24:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:37.565 02:24:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.565 02:24:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.565 02:24:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:37.565 02:24:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:37.565 02:24:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:37.565 02:24:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:37.565 02:24:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:37.565 02:24:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.565 02:24:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:37.565 02:24:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:37.565 02:24:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:37.565 02:24:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:37.565 02:24:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:37.565 02:24:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:37.565 Cannot find device "nvmf_tgt_br" 00:22:37.565 02:24:36 -- nvmf/common.sh@154 -- # true 00:22:37.565 02:24:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:37.565 Cannot find device "nvmf_tgt_br2" 00:22:37.565 02:24:37 -- nvmf/common.sh@155 -- # true 00:22:37.565 02:24:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:37.565 02:24:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:37.565 Cannot find device "nvmf_tgt_br" 00:22:37.565 02:24:37 -- nvmf/common.sh@157 -- # true 00:22:37.565 02:24:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:37.565 Cannot find device "nvmf_tgt_br2" 00:22:37.565 02:24:37 -- nvmf/common.sh@158 -- # true 00:22:37.566 02:24:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:37.566 02:24:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:37.566 02:24:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:37.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.566 02:24:37 -- nvmf/common.sh@161 -- # true 00:22:37.566 02:24:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:37.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.566 02:24:37 -- nvmf/common.sh@162 -- # true 00:22:37.566 02:24:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:37.566 02:24:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:37.566 02:24:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:37.566 02:24:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:37.566 02:24:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:22:37.566 02:24:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:37.823 02:24:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:37.823 02:24:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:37.823 02:24:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:37.823 02:24:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:37.823 02:24:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:37.823 02:24:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:37.823 02:24:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:37.823 02:24:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:37.823 02:24:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:37.823 02:24:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:37.823 02:24:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:37.823 02:24:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:37.823 02:24:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:37.823 02:24:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:37.823 02:24:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:37.823 02:24:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:37.823 02:24:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:37.823 02:24:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:37.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:22:37.823 00:22:37.823 --- 10.0.0.2 ping statistics --- 00:22:37.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.823 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:37.823 02:24:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:37.823 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:37.823 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:22:37.823 00:22:37.823 --- 10.0.0.3 ping statistics --- 00:22:37.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.824 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:37.824 02:24:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:37.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:37.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:37.824 00:22:37.824 --- 10.0.0.1 ping statistics --- 00:22:37.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.824 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:37.824 02:24:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.824 02:24:37 -- nvmf/common.sh@421 -- # return 0 00:22:37.824 02:24:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:37.824 02:24:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.824 02:24:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:37.824 02:24:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:37.824 02:24:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.824 02:24:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:37.824 02:24:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:37.824 02:24:37 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:37.824 02:24:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:37.824 02:24:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:37.824 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:22:37.824 02:24:37 -- nvmf/common.sh@469 -- # nvmfpid=97298 00:22:37.824 02:24:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:37.824 02:24:37 -- nvmf/common.sh@470 -- # waitforlisten 97298 00:22:37.824 02:24:37 -- common/autotest_common.sh@819 -- # '[' -z 97298 ']' 00:22:37.824 02:24:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.824 02:24:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:37.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.824 02:24:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.824 02:24:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:37.824 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:22:37.824 [2024-07-15 02:24:37.339830] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:37.824 [2024-07-15 02:24:37.339954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.081 [2024-07-15 02:24:37.479311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.081 [2024-07-15 02:24:37.561756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:38.081 [2024-07-15 02:24:37.561905] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.081 [2024-07-15 02:24:37.561920] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.081 [2024-07-15 02:24:37.561928] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
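[note] The nvmf_veth_init plumbing above reduces to a small namespace-plus-bridge topology. A condensed sketch of the same ip(8) commands, with the interface names and 10.0.0.x addresses exactly as this run uses them, should the fixture ever need rebuilding by hand:

    # The target side lives in its own netns; veth pairs hang off a common bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # (bring all links up; a second pair, nvmf_tgt_if2 with 10.0.0.3, is added the same way)
    ping -c 1 10.0.0.2    # initiator -> target, mirroring the checks above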
00:22:38.081 [2024-07-15 02:24:37.561953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.015 02:24:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:39.015 02:24:38 -- common/autotest_common.sh@852 -- # return 0 00:22:39.015 02:24:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:39.015 02:24:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 02:24:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:39.015 02:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 02:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:39.015 02:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 02:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.015 02:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 [2024-07-15 02:24:38.487179] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.015 02:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:39.015 02:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 [2024-07-15 02:24:38.495292] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:39.015 02:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:39.015 02:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 null0 00:22:39.015 02:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:39.015 02:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 null1 00:22:39.015 02:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:39.015 02:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 null2 00:22:39.015 02:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:39.015 02:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 null3 00:22:39.015 02:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
00:22:39.015 02:24:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.015 02:24:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@47 -- # hostpid=97348 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:39.015 02:24:38 -- host/mdns_discovery.sh@48 -- # waitforlisten 97348 /tmp/host.sock 00:22:39.015 02:24:38 -- common/autotest_common.sh@819 -- # '[' -z 97348 ']' 00:22:39.015 02:24:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:39.015 02:24:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:39.015 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:39.015 02:24:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:39.015 02:24:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:39.015 02:24:38 -- common/autotest_common.sh@10 -- # set +x 00:22:39.273 [2024-07-15 02:24:38.593811] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:39.273 [2024-07-15 02:24:38.593927] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97348 ] 00:22:39.273 [2024-07-15 02:24:38.735268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.273 [2024-07-15 02:24:38.824486] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:39.273 [2024-07-15 02:24:38.824646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.207 02:24:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:40.207 02:24:39 -- common/autotest_common.sh@852 -- # return 0 00:22:40.207 02:24:39 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:40.207 02:24:39 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:40.207 02:24:39 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:40.207 02:24:39 -- host/mdns_discovery.sh@57 -- # avahipid=97377 00:22:40.207 02:24:39 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:40.207 02:24:39 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:40.207 02:24:39 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:40.207 Process 979 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:40.207 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:40.207 Successfully dropped root privileges. 00:22:40.207 avahi-daemon 0.8 starting up. 00:22:40.207 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:40.207 Successfully called chroot(). 00:22:40.207 Successfully dropped remaining capabilities. 00:22:41.141 No service file found in /etc/avahi/services. 00:22:41.141 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:41.141 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
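[note] Stripped of the xtrace plumbing, the rpc_cmd provisioning in the block above is a short sequence of plain rpc.py calls against the target. A sketch of the equivalent direct invocations, assuming the target's default /var/tmp/spdk.sock socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_set_config --discovery-filter=address   # filter discovery log entries by address
    $rpc framework_start_init                         # finish --wait-for-rpc startup
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192-byte in-capsule data
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
         -t tcp -a 10.0.0.2 -s 8009                   # discovery service the CDC record will point at
    for n in null0 null1 null2 null3; do              # backing namespaces for the test subsystems
        $rpc bdev_null_create "$n" 1000 512
    done
    $rpc bdev_wait_for_examine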
00:22:41.141 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:41.141 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:41.141 Network interface enumeration completed. 00:22:41.141 Registering new address record for fe80::300c:2eff:fe16:6eab on nvmf_tgt_if2.*. 00:22:41.141 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:41.141 Registering new address record for fe80::847c:46ff:febf:3ed9 on nvmf_tgt_if.*. 00:22:41.141 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:41.141 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 3768451541. 00:22:41.141 02:24:40 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:41.141 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.141 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.141 02:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.141 02:24:40 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:41.141 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.141 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.141 02:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.141 02:24:40 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:41.142 02:24:40 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:41.142 02:24:40 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:41.142 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.142 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.142 02:24:40 -- host/mdns_discovery.sh@68 -- # sort 00:22:41.142 02:24:40 -- host/mdns_discovery.sh@68 -- # xargs 00:22:41.142 02:24:40 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:41.400 02:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@64 -- # sort 00:22:41.400 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@64 -- # xargs 00:22:41.400 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 02:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:41.400 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.400 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 02:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:41.400 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.400 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@68 -- # sort 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@68 -- # jq -r 
'.[].name' 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@68 -- # xargs 00:22:41.400 02:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:41.400 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.400 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@64 -- # sort 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@64 -- # xargs 00:22:41.400 02:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:41.400 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.400 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 02:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:41.400 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.400 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@68 -- # xargs 00:22:41.400 02:24:40 -- host/mdns_discovery.sh@68 -- # sort 00:22:41.400 02:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.658 [2024-07-15 02:24:40.986651] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:41.658 02:24:40 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:41.658 02:24:40 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:41.658 02:24:40 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.658 02:24:40 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:41.658 02:24:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.658 02:24:40 -- host/mdns_discovery.sh@64 -- # sort 00:22:41.658 02:24:40 -- common/autotest_common.sh@10 -- # set +x 00:22:41.658 02:24:40 -- host/mdns_discovery.sh@64 -- # xargs 00:22:41.658 02:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:41.659 02:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.659 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:22:41.659 [2024-07-15 02:24:41.056092] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.659 02:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:41.659 02:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.659 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:22:41.659 02:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@111 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:41.659 02:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.659 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:22:41.659 02:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:41.659 02:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.659 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:22:41.659 02:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:41.659 02:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.659 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:22:41.659 02:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:41.659 02:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.659 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:22:41.659 [2024-07-15 02:24:41.096008] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:41.659 02:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:41.659 02:24:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.659 02:24:41 -- common/autotest_common.sh@10 -- # set +x 00:22:41.659 [2024-07-15 02:24:41.107956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:41.659 02:24:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=97434 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:41.659 02:24:41 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:42.621 [2024-07-15 02:24:41.886658] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:42.621 Established under name 'CDC' 00:22:42.879 [2024-07-15 02:24:42.286668] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:42.879 [2024-07-15 02:24:42.286712] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:22:42.879 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:42.879 cookie is 0 00:22:42.879 is_local: 1 00:22:42.879 our_own: 0 00:22:42.879 wide_area: 0 00:22:42.879 multicast: 1 00:22:42.879 cached: 1 00:22:42.879 [2024-07-15 02:24:42.386652] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:42.879 [2024-07-15 02:24:42.386695] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:22:42.879 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:42.879 cookie is 0 00:22:42.879 is_local: 1 00:22:42.879 our_own: 0 00:22:42.879 wide_area: 0 00:22:42.879 multicast: 1 00:22:42.879 cached: 1 
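[note] The 'CDC' service resolved above is an ordinary mDNS record of type _nvme-disc._tcp, published by the avahi-daemon that was started with allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2. The publish step, exactly as issued inside the target namespace above:

    # Advertise the discovery controller (port 8009) as a CDC; the TXT records
    # carry the discovery NQN and the transport type for the host to use.
    ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish \
        --domain=local --service CDC _nvme-disc._tcp 8009 \
        "NQN=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"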
00:22:43.828 [2024-07-15 02:24:43.291588] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:43.828 [2024-07-15 02:24:43.291660] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:43.828 [2024-07-15 02:24:43.291681] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:43.828 [2024-07-15 02:24:43.379723] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:44.086 [2024-07-15 02:24:43.391426] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:44.086 [2024-07-15 02:24:43.391450] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:44.086 [2024-07-15 02:24:43.391481] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:44.086 [2024-07-15 02:24:43.443203] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:44.086 [2024-07-15 02:24:43.443233] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:44.086 [2024-07-15 02:24:43.477332] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:44.086 [2024-07-15 02:24:43.532239] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:44.086 [2024-07-15 02:24:43.532272] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:46.613 02:24:46 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:22:46.613 02:24:46 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:46.613 02:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.613 02:24:46 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:46.613 02:24:46 -- host/mdns_discovery.sh@80 -- # sort 00:22:46.613 02:24:46 -- common/autotest_common.sh@10 -- # set +x 00:22:46.613 02:24:46 -- host/mdns_discovery.sh@80 -- # xargs 00:22:46.613 02:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.871 02:24:46 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:22:46.871 02:24:46 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:22:46.871 02:24:46 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:46.871 02:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.871 02:24:46 -- common/autotest_common.sh@10 -- # set +x 00:22:46.871 02:24:46 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:46.871 02:24:46 -- host/mdns_discovery.sh@76 -- # sort 00:22:46.871 02:24:46 -- host/mdns_discovery.sh@76 -- # xargs 00:22:46.872 02:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:46.872 02:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:22:46.872 02:24:46 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:46.872 02:24:46 -- common/autotest_common.sh@10 -- # set +x 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@68 -- # sort 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@68 -- # xargs 00:22:46.872 02:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@64 -- # sort 00:22:46.872 02:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.872 02:24:46 -- common/autotest_common.sh@10 -- # set +x 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@64 -- # xargs 00:22:46.872 02:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:46.872 02:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:46.872 02:24:46 -- common/autotest_common.sh@10 -- # set +x 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@72 -- # xargs 00:22:46.872 02:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:46.872 02:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:46.872 02:24:46 -- common/autotest_common.sh@10 -- # set +x 00:22:46.872 02:24:46 -- host/mdns_discovery.sh@72 -- # xargs 00:22:46.872 02:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:47.130 02:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:47.130 02:24:46 -- common/autotest_common.sh@10 -- # set +x 00:22:47.130 02:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:47.130 02:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.130 02:24:46 -- common/autotest_common.sh@10 -- # set +x 00:22:47.130 02:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:22:47.130 02:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.130 02:24:46 -- common/autotest_common.sh@10 -- # set +x 00:22:47.130 02:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.130 02:24:46 -- host/mdns_discovery.sh@139 -- # sleep 1 00:22:48.066 02:24:47 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:22:48.066 02:24:47 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.066 02:24:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.066 02:24:47 -- common/autotest_common.sh@10 -- # set +x 00:22:48.066 02:24:47 -- host/mdns_discovery.sh@64 -- # sort 00:22:48.066 02:24:47 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:48.066 02:24:47 -- host/mdns_discovery.sh@64 -- # xargs 00:22:48.066 02:24:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.066 02:24:47 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:48.066 02:24:47 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:22:48.066 02:24:47 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:48.066 02:24:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.066 02:24:47 -- common/autotest_common.sh@10 -- # set +x 00:22:48.066 02:24:47 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:22:48.066 02:24:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.324 02:24:47 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:22:48.324 02:24:47 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:48.324 02:24:47 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:22:48.324 02:24:47 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:48.324 02:24:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.324 02:24:47 -- common/autotest_common.sh@10 -- # set +x 00:22:48.324 [2024-07-15 02:24:47.651205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:48.324 [2024-07-15 02:24:47.652287] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:48.324 [2024-07-15 02:24:47.652326] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:48.324 [2024-07-15 02:24:47.652362] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:48.324 [2024-07-15 02:24:47.652376] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:48.324 02:24:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.324 02:24:47 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:22:48.324 02:24:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:48.324 02:24:47 -- common/autotest_common.sh@10 -- # set +x 00:22:48.324 [2024-07-15 02:24:47.659099] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:48.324 [2024-07-15 02:24:47.659285] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:48.324 [2024-07-15 02:24:47.659331] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:48.324 02:24:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:48.324 02:24:47 -- host/mdns_discovery.sh@149 -- # sleep 1 00:22:48.324 [2024-07-15 02:24:47.790371] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:22:48.324 [2024-07-15 02:24:47.790592] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:22:48.324 [2024-07-15 02:24:47.854714] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:48.324 [2024-07-15 02:24:47.854744] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:48.324 [2024-07-15 02:24:47.854752] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:48.324 [2024-07-15 02:24:47.854772] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:48.324 [2024-07-15 02:24:47.854820] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:48.324 [2024-07-15 02:24:47.854846] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:48.324 [2024-07-15 02:24:47.854852] 
bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:48.324 [2024-07-15 02:24:47.854867] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:48.583 [2024-07-15 02:24:47.900469] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:48.583 [2024-07-15 02:24:47.900496] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:48.583 [2024-07-15 02:24:47.900536] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:48.583 [2024-07-15 02:24:47.900544] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:49.149 02:24:48 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:22:49.149 02:24:48 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.149 02:24:48 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:49.149 02:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.149 02:24:48 -- host/mdns_discovery.sh@68 -- # sort 00:22:49.149 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.149 02:24:48 -- host/mdns_discovery.sh@68 -- # xargs 00:22:49.149 02:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:49.407 02:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@64 -- # sort 00:22:49.407 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@64 -- # xargs 00:22:49.407 02:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:49.407 02:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.407 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@72 -- # xargs 00:22:49.407 02:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n 
mdns1_nvme0 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:49.407 02:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@72 -- # xargs 00:22:49.407 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.407 02:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:49.407 02:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.407 02:24:48 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:49.407 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.407 02:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.408 02:24:48 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:49.408 02:24:48 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:49.408 02:24:48 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:22:49.408 02:24:48 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:49.408 02:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.408 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.408 [2024-07-15 02:24:48.964135] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:49.408 [2024-07-15 02:24:48.964169] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:49.408 [2024-07-15 02:24:48.964201] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:49.408 [2024-07-15 02:24:48.964214] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:49.668 02:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.668 02:24:48 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:49.668 02:24:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.668 02:24:48 -- common/autotest_common.sh@10 -- # set +x 00:22:49.668 [2024-07-15 02:24:48.971622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.668 [2024-07-15 02:24:48.971692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.668 [2024-07-15 02:24:48.971708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.668 [2024-07-15 02:24:48.971718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.668 [2024-07-15 02:24:48.971728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.668 [2024-07-15 02:24:48.971738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.668 [2024-07-15 02:24:48.971747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:49.668 [2024-07-15 02:24:48.971756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.668 [2024-07-15 02:24:48.971765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.668 [2024-07-15 02:24:48.976143] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:49.668 [2024-07-15 02:24:48.976215] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:49.668 02:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.668 02:24:48 -- host/mdns_discovery.sh@162 -- # sleep 1 00:22:49.668 [2024-07-15 02:24:48.981564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.668 [2024-07-15 02:24:48.985780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.668 [2024-07-15 02:24:48.985815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.668 [2024-07-15 02:24:48.985839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.668 [2024-07-15 02:24:48.985849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.668 [2024-07-15 02:24:48.985859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.668 [2024-07-15 02:24:48.985868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.668 [2024-07-15 02:24:48.985878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.668 [2024-07-15 02:24:48.985887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.668 [2024-07-15 02:24:48.985896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.668 [2024-07-15 02:24:48.991584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.668 [2024-07-15 02:24:48.991776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.668 [2024-07-15 02:24:48.991848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.668 [2024-07-15 02:24:48.991864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.668 [2024-07-15 02:24:48.991875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.668 [2024-07-15 02:24:48.991892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.668 [2024-07-15 02:24:48.991907] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.668 [2024-07-15 02:24:48.991916] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.668 [2024-07-15 02:24:48.991927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.668 [2024-07-15 02:24:48.991943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.668 [2024-07-15 02:24:48.995742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.668 [2024-07-15 02:24:49.001699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.668 [2024-07-15 02:24:49.001814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.668 [2024-07-15 02:24:49.001890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.668 [2024-07-15 02:24:49.001906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.668 [2024-07-15 02:24:49.001916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.668 [2024-07-15 02:24:49.001932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.668 [2024-07-15 02:24:49.001946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.001955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.001964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.669 [2024-07-15 02:24:49.001978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.669 [2024-07-15 02:24:49.005767] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.669 [2024-07-15 02:24:49.005890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.005935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.005951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.669 [2024-07-15 02:24:49.005961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.005977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.005991] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.006000] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.006009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.669 [2024-07-15 02:24:49.006022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
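The connect() failed, errno = 111 (ECONNREFUSED) cycles here are expected rather than a failure: the test has just removed the original 4420 listeners, so each attached controller's reconnect poller keeps hitting a closed port until the discovery poller re-reads the log page and prunes the stale path. The triggering step, in the same RPC idiom the log shows:

# Remove the 4420 listeners while the 4421 paths stay up; live connections
# to 4420 now fail with ECONNREFUSED (errno 111) until discovery drops them.
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420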
00:22:49.669 [2024-07-15 02:24:49.011768] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.669 [2024-07-15 02:24:49.011879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.011921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.011936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.669 [2024-07-15 02:24:49.011945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.011960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.011973] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.011981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.011989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.669 [2024-07-15 02:24:49.012002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.669 [2024-07-15 02:24:49.015832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.669 [2024-07-15 02:24:49.015926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.015967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.015982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.669 [2024-07-15 02:24:49.015991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.016005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.016018] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.016042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.016051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.669 [2024-07-15 02:24:49.016064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.669 [2024-07-15 02:24:49.021873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.669 [2024-07-15 02:24:49.021961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.022007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.022022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.669 [2024-07-15 02:24:49.022033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.022049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.022063] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.022072] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.022080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.669 [2024-07-15 02:24:49.022094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.669 [2024-07-15 02:24:49.025887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.669 [2024-07-15 02:24:49.025974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.026019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.026035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.669 [2024-07-15 02:24:49.026045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.026061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.026074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.026082] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.026091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.669 [2024-07-15 02:24:49.026104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.669 [2024-07-15 02:24:49.031929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.669 [2024-07-15 02:24:49.032039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.032081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.032096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.669 [2024-07-15 02:24:49.032106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.032120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.032133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.032141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.032149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.669 [2024-07-15 02:24:49.032162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.669 [2024-07-15 02:24:49.035941] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.669 [2024-07-15 02:24:49.036034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.036076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.036090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.669 [2024-07-15 02:24:49.036099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.036114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.036126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.036134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.036142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.669 [2024-07-15 02:24:49.036171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.669 [2024-07-15 02:24:49.042015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.669 [2024-07-15 02:24:49.042099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.042156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.042171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.669 [2024-07-15 02:24:49.042181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.042196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.042209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.042217] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.042226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.669 [2024-07-15 02:24:49.042272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.669 [2024-07-15 02:24:49.045993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.669 [2024-07-15 02:24:49.046124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.046167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.046182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.669 [2024-07-15 02:24:49.046191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.046206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.046219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.046227] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.669 [2024-07-15 02:24:49.046235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.669 [2024-07-15 02:24:49.046248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
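The two tqpair pointers alternating through these retry cycles map to the two controllers: 0x905f90 is the cnode0 path on 10.0.0.2, 0x8b61d0 the cnode20 path on 10.0.0.3. A hedged one-liner for watching which trsvcids each controller still holds during this window, composed from the same RPC and jq fields the test uses elsewhere (.name and .ctrlrs[].trid.trsvcid):

# Print each controller with its remaining listener ports; while the retries
# run both ports are still listed, afterwards only 4421 should survive.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
    | jq -r '.[] | "\(.name): \([.ctrlrs[].trid.trsvcid] | join(" "))"'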
00:22:49.669 [2024-07-15 02:24:49.052087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.669 [2024-07-15 02:24:49.052200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.052244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.669 [2024-07-15 02:24:49.052260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.669 [2024-07-15 02:24:49.052270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.669 [2024-07-15 02:24:49.052285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.669 [2024-07-15 02:24:49.052316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.669 [2024-07-15 02:24:49.052326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.052335] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.670 [2024-07-15 02:24:49.052348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.670 [2024-07-15 02:24:49.056081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.670 [2024-07-15 02:24:49.056191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.056235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.056250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.670 [2024-07-15 02:24:49.056261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.056275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.056289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.670 [2024-07-15 02:24:49.056297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.056306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.670 [2024-07-15 02:24:49.056319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.670 [2024-07-15 02:24:49.062185] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.670 [2024-07-15 02:24:49.062294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.062336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.062351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.670 [2024-07-15 02:24:49.062361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.062375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.062404] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.670 [2024-07-15 02:24:49.062413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.062421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.670 [2024-07-15 02:24:49.062434] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.670 [2024-07-15 02:24:49.066165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.670 [2024-07-15 02:24:49.066298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.066342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.066357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.670 [2024-07-15 02:24:49.066367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.066381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.066438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.670 [2024-07-15 02:24:49.066483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.066492] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.670 [2024-07-15 02:24:49.066506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.670 [2024-07-15 02:24:49.072254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.670 [2024-07-15 02:24:49.072368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.072409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.072424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.670 [2024-07-15 02:24:49.072433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.072449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.072478] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.670 [2024-07-15 02:24:49.072488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.072496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.670 [2024-07-15 02:24:49.072509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.670 [2024-07-15 02:24:49.076266] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.670 [2024-07-15 02:24:49.076375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.076416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.076447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.670 [2024-07-15 02:24:49.076456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.076471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.076501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.670 [2024-07-15 02:24:49.076510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.076519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.670 [2024-07-15 02:24:49.076532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.670 [2024-07-15 02:24:49.082338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.670 [2024-07-15 02:24:49.082447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.082520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.082536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.670 [2024-07-15 02:24:49.082546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.082561] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.082601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.670 [2024-07-15 02:24:49.082611] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.082620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.670 [2024-07-15 02:24:49.082634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.670 [2024-07-15 02:24:49.086346] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.670 [2024-07-15 02:24:49.086454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.086528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.086543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.670 [2024-07-15 02:24:49.086553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.086569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.086609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.670 [2024-07-15 02:24:49.086621] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.086630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.670 [2024-07-15 02:24:49.086657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.670 [2024-07-15 02:24:49.092405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.670 [2024-07-15 02:24:49.092514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.092556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.092571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.670 [2024-07-15 02:24:49.092581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.092596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.092637] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.670 [2024-07-15 02:24:49.092648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.092657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.670 [2024-07-15 02:24:49.092686] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.670 [2024-07-15 02:24:49.096410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.670 [2024-07-15 02:24:49.096519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.096560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.096575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.670 [2024-07-15 02:24:49.096584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.096599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.096640] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.670 [2024-07-15 02:24:49.096651] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.670 [2024-07-15 02:24:49.096659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.670 [2024-07-15 02:24:49.096672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.670 [2024-07-15 02:24:49.102487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.670 [2024-07-15 02:24:49.102601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.102659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.670 [2024-07-15 02:24:49.102676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x905f90 with addr=10.0.0.2, port=4420 00:22:49.670 [2024-07-15 02:24:49.102686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x905f90 is same with the state(5) to be set 00:22:49.670 [2024-07-15 02:24:49.102701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x905f90 (9): Bad file descriptor 00:22:49.670 [2024-07-15 02:24:49.102738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.671 [2024-07-15 02:24:49.102748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.671 [2024-07-15 02:24:49.102757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.671 [2024-07-15 02:24:49.102786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.671 [2024-07-15 02:24:49.106504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:22:49.671 [2024-07-15 02:24:49.106625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.671 [2024-07-15 02:24:49.106671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.671 [2024-07-15 02:24:49.106687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b61d0 with addr=10.0.0.3, port=4420 00:22:49.671 [2024-07-15 02:24:49.106697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b61d0 is same with the state(5) to be set 00:22:49.671 [2024-07-15 02:24:49.106712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b61d0 (9): Bad file descriptor 00:22:49.671 [2024-07-15 02:24:49.106751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:22:49.671 [2024-07-15 02:24:49.106761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:22:49.671 [2024-07-15 02:24:49.106770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:22:49.671 [2024-07-15 02:24:49.106799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.671 [2024-07-15 02:24:49.107749] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:49.671 [2024-07-15 02:24:49.107791] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:49.671 [2024-07-15 02:24:49.107811] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:49.671 [2024-07-15 02:24:49.107846] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:22:49.671 [2024-07-15 02:24:49.107861] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:49.671 [2024-07-15 02:24:49.107874] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:49.671 [2024-07-15 02:24:49.193841] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:49.671 [2024-07-15 02:24:49.193905] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:50.604 02:24:49 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:22:50.604 02:24:49 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.604 02:24:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.604 02:24:49 -- common/autotest_common.sh@10 -- # set +x 00:22:50.604 02:24:49 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:50.604 02:24:49 -- host/mdns_discovery.sh@68 -- # sort 00:22:50.604 02:24:49 -- host/mdns_discovery.sh@68 -- # xargs 00:22:50.604 02:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.604 02:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.604 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@64 -- # xargs 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@64 -- # sort 00:22:50.604 02:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:22:50.604 02:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.604 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:50.604 02:24:50 -- host/mdns_discovery.sh@72 -- # xargs 00:22:50.604 02:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
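With the stale 4420 paths reported not found and pruned, each controller is left with a single path on 4421, which the @166/@167 checks that follow confirm. A sketch of that verification in the test's own idiom (the expected value comes from the [[ 4421 == 4421 ]] comparisons below):

# Each mdns-discovered controller should now expose exactly one path, 4421.
paths=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
[[ $paths == "4421" ]]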
00:22:50.861 02:24:50 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:22:50.861 02:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.861 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@72 -- # xargs 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:22:50.861 02:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:50.861 02:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.861 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:22:50.861 02:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:50.861 02:24:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:50.861 02:24:50 -- common/autotest_common.sh@10 -- # set +x 00:22:50.861 02:24:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:50.861 02:24:50 -- host/mdns_discovery.sh@172 -- # sleep 1 00:22:50.861 [2024-07-15 02:24:50.286675] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:51.804 02:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.804 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@80 -- # sort 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@80 -- # xargs 00:22:51.804 02:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:51.804 02:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.804 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@68 -- # xargs 00:22:51.804 02:24:51 -- host/mdns_discovery.sh@68 -- # sort 00:22:51.804 02:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.062 02:24:51 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:22:52.062 02:24:51 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:22:52.062 02:24:51 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:52.062 02:24:51 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:52.062 02:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.062 02:24:51 -- host/mdns_discovery.sh@64 -- # sort 00:22:52.062 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:22:52.062 02:24:51 -- host/mdns_discovery.sh@64 -- # xargs 00:22:52.062 02:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.062 02:24:51 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:22:52.062 02:24:51 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:22:52.063 02:24:51 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:22:52.063 02:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.063 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:22:52.063 02:24:51 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:22:52.063 02:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.063 02:24:51 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:22:52.063 02:24:51 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:22:52.063 02:24:51 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:22:52.063 02:24:51 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:52.063 02:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.063 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:22:52.063 02:24:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.063 02:24:51 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:52.063 02:24:51 -- common/autotest_common.sh@640 -- # local es=0 00:22:52.063 02:24:51 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:52.063 02:24:51 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:52.063 02:24:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:52.063 02:24:51 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:52.063 02:24:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:52.063 02:24:51 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:22:52.063 02:24:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.063 02:24:51 -- common/autotest_common.sh@10 -- # set +x 00:22:52.063 [2024-07-15 02:24:51.515749] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:22:52.063 2024/07/15 02:24:51 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:52.063 request: 00:22:52.063 { 00:22:52.063 "method": "bdev_nvme_start_mdns_discovery", 00:22:52.063 "params": { 00:22:52.063 "name": "mdns", 00:22:52.063 "svcname": "_nvme-disc._http", 00:22:52.063 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:52.063 } 00:22:52.063 } 00:22:52.063 Got JSON-RPC error response 00:22:52.063 GoRPCClient: error on JSON-RPC call 00:22:52.063 02:24:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:52.063 02:24:51 -- 
common/autotest_common.sh@643 -- # es=1 00:22:52.063 02:24:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:52.063 02:24:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:52.063 02:24:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:52.063 02:24:51 -- host/mdns_discovery.sh@183 -- # sleep 5 00:22:52.629 [2024-07-15 02:24:51.900487] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:52.629 [2024-07-15 02:24:52.000483] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:52.629 [2024-07-15 02:24:52.100498] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:52.629 [2024-07-15 02:24:52.100538] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:22:52.629 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:52.629 cookie is 0 00:22:52.629 is_local: 1 00:22:52.629 our_own: 0 00:22:52.629 wide_area: 0 00:22:52.629 multicast: 1 00:22:52.629 cached: 1 00:22:52.887 [2024-07-15 02:24:52.200494] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:52.887 [2024-07-15 02:24:52.200521] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:22:52.887 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:52.887 cookie is 0 00:22:52.887 is_local: 1 00:22:52.887 our_own: 0 00:22:52.887 wide_area: 0 00:22:52.887 multicast: 1 00:22:52.887 cached: 1 00:22:53.822 [2024-07-15 02:24:53.113353] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:53.822 [2024-07-15 02:24:53.113384] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:53.822 [2024-07-15 02:24:53.113402] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:53.822 [2024-07-15 02:24:53.199455] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:22:53.822 [2024-07-15 02:24:53.213069] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:53.822 [2024-07-15 02:24:53.213110] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:53.822 [2024-07-15 02:24:53.213145] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:53.822 [2024-07-15 02:24:53.269259] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:53.822 [2024-07-15 02:24:53.269292] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:22:53.822 [2024-07-15 02:24:53.299392] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:22:53.822 [2024-07-15 02:24:53.358551] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:53.822 [2024-07-15 02:24:53.358587] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:57.102 02:24:56 -- 
host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:22:57.102 02:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.102 02:24:56 -- common/autotest_common.sh@10 -- # set +x 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@80 -- # sort 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@80 -- # xargs 00:22:57.102 02:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@76 -- # xargs 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@76 -- # sort 00:22:57.102 02:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.102 02:24:56 -- common/autotest_common.sh@10 -- # set +x 00:22:57.102 02:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:57.102 02:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.102 02:24:56 -- common/autotest_common.sh@10 -- # set +x 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@64 -- # sort 00:22:57.102 02:24:56 -- host/mdns_discovery.sh@64 -- # xargs 00:22:57.361 02:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:57.361 02:24:56 -- common/autotest_common.sh@640 -- # local es=0 00:22:57.361 02:24:56 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:57.361 02:24:56 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:57.361 02:24:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:57.361 02:24:56 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:57.361 02:24:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:57.361 02:24:56 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:57.361 02:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.361 02:24:56 -- common/autotest_common.sh@10 -- # set +x 00:22:57.361 [2024-07-15 02:24:56.707791] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:22:57.361 2024/07/15 02:24:56 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: 
map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:22:57.361 request: 00:22:57.361 { 00:22:57.361 "method": "bdev_nvme_start_mdns_discovery", 00:22:57.361 "params": { 00:22:57.361 "name": "cdc", 00:22:57.361 "svcname": "_nvme-disc._tcp", 00:22:57.361 "hostnqn": "nqn.2021-12.io.spdk:test" 00:22:57.361 } 00:22:57.361 } 00:22:57.361 Got JSON-RPC error response 00:22:57.361 GoRPCClient: error on JSON-RPC call 00:22:57.361 02:24:56 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:57.361 02:24:56 -- common/autotest_common.sh@643 -- # es=1 00:22:57.361 02:24:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:57.361 02:24:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:57.361 02:24:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:57.361 02:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:22:57.361 02:24:56 -- common/autotest_common.sh@10 -- # set +x 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@76 -- # sort 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@76 -- # xargs 00:22:57.361 02:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.361 02:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.361 02:24:56 -- common/autotest_common.sh@10 -- # set +x 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@64 -- # sort 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@64 -- # xargs 00:22:57.361 02:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:22:57.361 02:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.361 02:24:56 -- common/autotest_common.sh@10 -- # set +x 00:22:57.361 02:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@197 -- # kill 97348 00:22:57.361 02:24:56 -- host/mdns_discovery.sh@200 -- # wait 97348 00:22:57.620 [2024-07-15 02:24:56.941734] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:22:57.620 02:24:57 -- host/mdns_discovery.sh@201 -- # kill 97434 00:22:57.620 Got SIGTERM, quitting. 00:22:57.620 02:24:57 -- host/mdns_discovery.sh@202 -- # kill 97377 00:22:57.620 02:24:57 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:22:57.620 02:24:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:57.620 Got SIGTERM, quitting. 
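
Both NOT checks in this test exercise the duplicate guards in bdev_mdns_client.c: reusing the name "mdns" is rejected regardless of service type, and a second registration for _nvme-disc._tcp is rejected regardless of name, each surfacing to the caller as JSON-RPC Code=-17 (File exists). A sketch that reproduces the same two rejections against a live target, socket path as in this run:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # same -b name, different svcname -> "mDNS discovery already running with name mdns"
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || echo "rejected, es=$?"
  # different -b name, same svcname -> "mDNS discovery already running for service _nvme-disc._tcp"
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test || echo "rejected, es=$?"
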
00:22:57.620 02:24:57 -- nvmf/common.sh@116 -- # sync 00:22:57.620 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:57.620 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:57.620 avahi-daemon 0.8 exiting. 00:22:57.620 02:24:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:57.620 02:24:57 -- nvmf/common.sh@119 -- # set +e 00:22:57.620 02:24:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:57.620 02:24:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:57.620 rmmod nvme_tcp 00:22:57.620 rmmod nvme_fabrics 00:22:57.620 rmmod nvme_keyring 00:22:57.620 02:24:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:57.620 02:24:57 -- nvmf/common.sh@123 -- # set -e 00:22:57.620 02:24:57 -- nvmf/common.sh@124 -- # return 0 00:22:57.620 02:24:57 -- nvmf/common.sh@477 -- # '[' -n 97298 ']' 00:22:57.620 02:24:57 -- nvmf/common.sh@478 -- # killprocess 97298 00:22:57.620 02:24:57 -- common/autotest_common.sh@926 -- # '[' -z 97298 ']' 00:22:57.620 02:24:57 -- common/autotest_common.sh@930 -- # kill -0 97298 00:22:57.620 02:24:57 -- common/autotest_common.sh@931 -- # uname 00:22:57.620 02:24:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:57.620 02:24:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97298 00:22:57.620 killing process with pid 97298 00:22:57.620 02:24:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:57.620 02:24:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:57.620 02:24:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97298' 00:22:57.620 02:24:57 -- common/autotest_common.sh@945 -- # kill 97298 00:22:57.620 02:24:57 -- common/autotest_common.sh@950 -- # wait 97298 00:22:57.879 02:24:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:57.879 02:24:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:57.879 02:24:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:57.879 02:24:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.879 02:24:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:57.879 02:24:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.879 02:24:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.879 02:24:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.879 02:24:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:57.879 ************************************ 00:22:57.879 END TEST nvmf_mdns_discovery 00:22:57.879 00:22:57.879 real 0m20.576s 00:22:57.879 user 0m40.360s 00:22:57.879 sys 0m1.997s 00:22:57.879 02:24:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:57.879 02:24:57 -- common/autotest_common.sh@10 -- # set +x 00:22:57.879 ************************************ 00:22:58.140 02:24:57 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:22:58.140 02:24:57 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:58.140 02:24:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:58.140 02:24:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:58.140 02:24:57 -- common/autotest_common.sh@10 -- # set +x 00:22:58.140 ************************************ 00:22:58.140 START TEST nvmf_multipath 00:22:58.140 ************************************ 00:22:58.140 02:24:57 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:58.140 * Looking for test storage... 00:22:58.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:58.140 02:24:57 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:58.141 02:24:57 -- nvmf/common.sh@7 -- # uname -s 00:22:58.141 02:24:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.141 02:24:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.141 02:24:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.141 02:24:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.141 02:24:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.141 02:24:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.141 02:24:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.141 02:24:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.141 02:24:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.141 02:24:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.141 02:24:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:22:58.141 02:24:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:22:58.141 02:24:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.141 02:24:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.141 02:24:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:58.141 02:24:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:58.141 02:24:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.141 02:24:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.141 02:24:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.141 02:24:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.141 02:24:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.141 02:24:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.141 02:24:57 -- paths/export.sh@5 -- # export PATH 00:22:58.141 02:24:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.141 02:24:57 -- nvmf/common.sh@46 -- # : 0 00:22:58.141 02:24:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:58.141 02:24:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:58.141 02:24:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:58.141 02:24:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.141 02:24:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.141 02:24:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:58.141 02:24:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:58.141 02:24:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:58.141 02:24:57 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:58.141 02:24:57 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:58.141 02:24:57 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.141 02:24:57 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:58.141 02:24:57 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.141 02:24:57 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:58.141 02:24:57 -- host/multipath.sh@30 -- # nvmftestinit 00:22:58.141 02:24:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:58.141 02:24:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.141 02:24:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:58.141 02:24:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:58.141 02:24:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:58.141 02:24:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.141 02:24:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.141 02:24:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.141 02:24:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:58.141 02:24:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:58.141 02:24:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:58.141 02:24:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:58.141 02:24:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:58.141 02:24:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:58.141 02:24:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.141 02:24:57 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.141 02:24:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:58.141 02:24:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:58.141 02:24:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:58.141 02:24:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:58.141 02:24:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:58.141 02:24:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.141 02:24:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:58.141 02:24:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:58.141 02:24:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:58.141 02:24:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:58.141 02:24:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:58.141 02:24:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:58.141 Cannot find device "nvmf_tgt_br" 00:22:58.141 02:24:57 -- nvmf/common.sh@154 -- # true 00:22:58.141 02:24:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:58.141 Cannot find device "nvmf_tgt_br2" 00:22:58.141 02:24:57 -- nvmf/common.sh@155 -- # true 00:22:58.141 02:24:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:58.141 02:24:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:58.141 Cannot find device "nvmf_tgt_br" 00:22:58.141 02:24:57 -- nvmf/common.sh@157 -- # true 00:22:58.141 02:24:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:58.141 Cannot find device "nvmf_tgt_br2" 00:22:58.141 02:24:57 -- nvmf/common.sh@158 -- # true 00:22:58.141 02:24:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:58.141 02:24:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:58.400 02:24:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:58.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.400 02:24:57 -- nvmf/common.sh@161 -- # true 00:22:58.400 02:24:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:58.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.400 02:24:57 -- nvmf/common.sh@162 -- # true 00:22:58.400 02:24:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:58.400 02:24:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:58.400 02:24:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:58.400 02:24:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:58.400 02:24:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:58.400 02:24:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:58.400 02:24:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:58.400 02:24:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:58.400 02:24:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:58.400 02:24:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:58.400 02:24:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:58.400 02:24:57 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:22:58.400 02:24:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:58.400 02:24:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:58.400 02:24:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:58.400 02:24:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:58.400 02:24:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:58.400 02:24:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:58.400 02:24:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:58.400 02:24:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:58.400 02:24:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:58.400 02:24:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:58.401 02:24:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:58.401 02:24:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:58.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:22:58.401 00:22:58.401 --- 10.0.0.2 ping statistics --- 00:22:58.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.401 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:58.401 02:24:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:58.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:58.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:22:58.401 00:22:58.401 --- 10.0.0.3 ping statistics --- 00:22:58.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.401 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:22:58.401 02:24:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:58.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:22:58.401 00:22:58.401 --- 10.0.0.1 ping statistics --- 00:22:58.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.401 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:22:58.401 02:24:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.401 02:24:57 -- nvmf/common.sh@421 -- # return 0 00:22:58.401 02:24:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:58.401 02:24:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.401 02:24:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:58.401 02:24:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:58.401 02:24:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.401 02:24:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:58.401 02:24:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:58.401 02:24:57 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:58.401 02:24:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:58.401 02:24:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:58.401 02:24:57 -- common/autotest_common.sh@10 -- # set +x 00:22:58.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
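
The nvmf_veth_init fixture traced above builds a three-pair veth topology bridged on nvmf_br, with both target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. Condensed into a sketch with the same interface names and addresses as this run (the loop and the combined netns shell are editorial shorthand for the per-device commands logged above):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, 10.0.0.3
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if && ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip addr add 10.0.0.2/24 dev nvmf_tgt_if; ip addr add 10.0.0.3/24 dev nvmf_tgt_if2; ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up && ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # reachability check, as above
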
00:22:58.401 02:24:57 -- nvmf/common.sh@469 -- # nvmfpid=97944 00:22:58.401 02:24:57 -- nvmf/common.sh@470 -- # waitforlisten 97944 00:22:58.401 02:24:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:58.401 02:24:57 -- common/autotest_common.sh@819 -- # '[' -z 97944 ']' 00:22:58.401 02:24:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.401 02:24:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:58.401 02:24:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.401 02:24:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:58.401 02:24:57 -- common/autotest_common.sh@10 -- # set +x 00:22:58.660 [2024-07-15 02:24:57.961999] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:58.660 [2024-07-15 02:24:57.962899] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.660 [2024-07-15 02:24:58.103430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:58.660 [2024-07-15 02:24:58.192351] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:58.660 [2024-07-15 02:24:58.192827] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.660 [2024-07-15 02:24:58.192852] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.660 [2024-07-15 02:24:58.192863] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
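
Because the target was started with -e 0xFFFF, every tracepoint group is live, and the NOTICEs above name the two ways to inspect them. A short sketch, assuming the binary sits at build/bin in this checkout (the -f decode step follows spdk_trace's documented usage rather than anything this log shows):

  build/bin/spdk_trace -s nvmf -i 0        # live snapshot from shm instance 0, as the NOTICE suggests
  cp /dev/shm/nvmf_trace.0 /tmp/           # or keep the ring buffer for offline analysis
  build/bin/spdk_trace -f /tmp/nvmf_trace.0
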
00:22:58.660 [2024-07-15 02:24:58.192960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.660 [2024-07-15 02:24:58.193017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.595 02:24:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:59.595 02:24:58 -- common/autotest_common.sh@852 -- # return 0 00:22:59.595 02:24:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:59.595 02:24:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:59.595 02:24:58 -- common/autotest_common.sh@10 -- # set +x 00:22:59.595 02:24:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.595 02:24:58 -- host/multipath.sh@33 -- # nvmfapp_pid=97944 00:22:59.595 02:24:58 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:59.595 [2024-07-15 02:24:59.149964] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.853 02:24:59 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:59.853 Malloc0 00:23:00.111 02:24:59 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:00.111 02:24:59 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:00.369 02:24:59 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.627 [2024-07-15 02:25:00.066937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.627 02:25:00 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:00.885 [2024-07-15 02:25:00.287078] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:00.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.885 02:25:00 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:00.885 02:25:00 -- host/multipath.sh@44 -- # bdevperf_pid=98051 00:23:00.885 02:25:00 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.885 02:25:00 -- host/multipath.sh@47 -- # waitforlisten 98051 /var/tmp/bdevperf.sock 00:23:00.885 02:25:00 -- common/autotest_common.sh@819 -- # '[' -z 98051 ']' 00:23:00.885 02:25:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.885 02:25:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:00.885 02:25:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
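
Stripped of the xtrace noise, the target provisioning above amounts to six RPCs followed by the bdevperf launch on its own RPC socket. A condensed sketch using the exact arguments from this run ($RPC is editorial shorthand for the rpc.py path):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf acts as the host, reachable on /var/tmp/bdevperf.sock for perform_tests
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
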
00:23:00.885 02:25:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:00.885 02:25:00 -- common/autotest_common.sh@10 -- # set +x 00:23:02.261 02:25:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:02.261 02:25:01 -- common/autotest_common.sh@852 -- # return 0 00:23:02.261 02:25:01 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:02.261 02:25:01 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:02.519 Nvme0n1 00:23:02.519 02:25:02 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:03.084 Nvme0n1 00:23:03.084 02:25:02 -- host/multipath.sh@78 -- # sleep 1 00:23:03.084 02:25:02 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:04.018 02:25:03 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:04.018 02:25:03 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:04.275 02:25:03 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:04.533 02:25:03 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:04.533 02:25:03 -- host/multipath.sh@65 -- # dtrace_pid=98138 00:23:04.533 02:25:03 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97944 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:04.533 02:25:03 -- host/multipath.sh@66 -- # sleep 6 00:23:11.090 02:25:09 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:11.090 02:25:09 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:11.090 02:25:10 -- host/multipath.sh@67 -- # active_port=4421 00:23:11.090 02:25:10 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:11.090 Attaching 4 probes... 
00:23:11.090 @path[10.0.0.2, 4421]: 19384 00:23:11.090 @path[10.0.0.2, 4421]: 19898 00:23:11.090 @path[10.0.0.2, 4421]: 19399 00:23:11.090 @path[10.0.0.2, 4421]: 19637 00:23:11.090 @path[10.0.0.2, 4421]: 19489 00:23:11.090 02:25:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:11.090 02:25:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:11.090 02:25:10 -- host/multipath.sh@69 -- # sed -n 1p 00:23:11.090 02:25:10 -- host/multipath.sh@69 -- # port=4421 00:23:11.090 02:25:10 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:11.090 02:25:10 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:11.090 02:25:10 -- host/multipath.sh@72 -- # kill 98138 00:23:11.090 02:25:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:11.090 02:25:10 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:11.090 02:25:10 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:11.090 02:25:10 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:11.447 02:25:10 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:11.447 02:25:10 -- host/multipath.sh@65 -- # dtrace_pid=98275 00:23:11.447 02:25:10 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97944 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:11.447 02:25:10 -- host/multipath.sh@66 -- # sleep 6 00:23:18.020 02:25:16 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:18.020 02:25:16 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:18.020 02:25:16 -- host/multipath.sh@67 -- # active_port=4420 00:23:18.020 02:25:16 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:18.020 Attaching 4 probes... 
00:23:18.020 @path[10.0.0.2, 4420]: 19192 00:23:18.020 @path[10.0.0.2, 4420]: 20027 00:23:18.020 @path[10.0.0.2, 4420]: 20364 00:23:18.020 @path[10.0.0.2, 4420]: 20172 00:23:18.020 @path[10.0.0.2, 4420]: 20209 00:23:18.020 02:25:16 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:18.020 02:25:16 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:18.020 02:25:16 -- host/multipath.sh@69 -- # sed -n 1p 00:23:18.020 02:25:16 -- host/multipath.sh@69 -- # port=4420 00:23:18.020 02:25:16 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:18.020 02:25:16 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:18.020 02:25:16 -- host/multipath.sh@72 -- # kill 98275 00:23:18.020 02:25:16 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:18.020 02:25:16 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:18.020 02:25:16 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:18.020 02:25:17 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:18.020 02:25:17 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:18.020 02:25:17 -- host/multipath.sh@65 -- # dtrace_pid=98400 00:23:18.020 02:25:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97944 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:18.020 02:25:17 -- host/multipath.sh@66 -- # sleep 6 00:23:24.576 02:25:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:24.576 02:25:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:24.576 02:25:23 -- host/multipath.sh@67 -- # active_port=4421 00:23:24.576 02:25:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:24.576 Attaching 4 probes... 
00:23:24.576 @path[10.0.0.2, 4421]: 14611 00:23:24.576 @path[10.0.0.2, 4421]: 20558 00:23:24.576 @path[10.0.0.2, 4421]: 20394 00:23:24.576 @path[10.0.0.2, 4421]: 20041 00:23:24.576 @path[10.0.0.2, 4421]: 20069 00:23:24.576 02:25:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:24.576 02:25:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:24.576 02:25:23 -- host/multipath.sh@69 -- # sed -n 1p 00:23:24.576 02:25:23 -- host/multipath.sh@69 -- # port=4421 00:23:24.576 02:25:23 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:24.576 02:25:23 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:24.576 02:25:23 -- host/multipath.sh@72 -- # kill 98400 00:23:24.576 02:25:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:24.576 02:25:23 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:24.576 02:25:23 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:24.576 02:25:24 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:24.834 02:25:24 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:24.834 02:25:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97944 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:24.834 02:25:24 -- host/multipath.sh@65 -- # dtrace_pid=98536 00:23:24.834 02:25:24 -- host/multipath.sh@66 -- # sleep 6 00:23:31.389 02:25:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:31.389 02:25:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:31.389 02:25:30 -- host/multipath.sh@67 -- # active_port= 00:23:31.389 02:25:30 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:31.389 Attaching 4 probes... 
00:23:31.389 00:23:31.389 00:23:31.389 00:23:31.389 00:23:31.389 00:23:31.389 02:25:30 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:31.389 02:25:30 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:31.389 02:25:30 -- host/multipath.sh@69 -- # sed -n 1p 00:23:31.389 02:25:30 -- host/multipath.sh@69 -- # port= 00:23:31.389 02:25:30 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:31.389 02:25:30 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:31.389 02:25:30 -- host/multipath.sh@72 -- # kill 98536 00:23:31.389 02:25:30 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:31.389 02:25:30 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:31.389 02:25:30 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:31.389 02:25:30 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:31.648 02:25:31 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:31.648 02:25:31 -- host/multipath.sh@65 -- # dtrace_pid=98671 00:23:31.648 02:25:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97944 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:31.648 02:25:31 -- host/multipath.sh@66 -- # sleep 6 00:23:38.216 02:25:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:38.216 02:25:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:38.216 02:25:37 -- host/multipath.sh@67 -- # active_port=4421 00:23:38.216 02:25:37 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:38.216 Attaching 4 probes... 
00:23:38.216 @path[10.0.0.2, 4421]: 19306 00:23:38.216 @path[10.0.0.2, 4421]: 19963 00:23:38.216 @path[10.0.0.2, 4421]: 20004 00:23:38.216 @path[10.0.0.2, 4421]: 19524 00:23:38.216 @path[10.0.0.2, 4421]: 19329 00:23:38.216 02:25:37 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:38.216 02:25:37 -- host/multipath.sh@69 -- # sed -n 1p 00:23:38.216 02:25:37 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:38.216 02:25:37 -- host/multipath.sh@69 -- # port=4421 00:23:38.216 02:25:37 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:38.216 02:25:37 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:38.216 02:25:37 -- host/multipath.sh@72 -- # kill 98671 00:23:38.216 02:25:37 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:38.216 02:25:37 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:38.216 [2024-07-15 02:25:37.551580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6c30 is same with the state(5) to be set 00:23:38.216 [2024-07-15 02:25:37.551795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
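The confirm_io_on_port call that follows (multipath.sh@64-66) attaches the nvmf_path.bt probes to the bdevperf process (pid 97944) in the background, samples for six seconds, then kills the tracer and inspects its output. A sketch of that pattern, with the pid, script paths, and timing taken from the trace; redirecting bpftrace.sh's output to trace.txt is an assumption about how the wrapper is driven here:

  # Attach the per-path I/O counting probes to the running bdevperf
  # (pid 97944) and capture the map dump bpftrace emits when stopped.
  scripts/bpftrace.sh 97944 scripts/bpf/nvmf_path.bt > trace.txt &
  dtrace_pid=$!
  sleep 6            # let I/O accumulate per-path counts
  kill $dtrace_pid   # stop tracing; the @path[...] map lands in trace.txt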
00:23:38.217 02:25:37 -- host/multipath.sh@101 -- # sleep 1
00:23:39.150 02:25:38 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:23:39.150 02:25:38 -- host/multipath.sh@65 -- # dtrace_pid=98801
00:23:39.150 02:25:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97944 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:23:39.150 02:25:38 -- host/multipath.sh@66 -- # sleep 6
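The multipath.sh@67 records that follow derive active_port by asking the target which listener currently reports the expected ANA state. A minimal sketch of that query, assuming nvmf_subsystem_get_listeners returns the array-of-listeners JSON implied by the jq filter in the trace (each entry carrying ana_states[] and address.trsvcid):

  # Port of the first listener reporting non_optimized for ANA group 0.
  active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid')
  [[ "$active_port" == "4420" ]]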
00:23:45.706 02:25:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:23:45.706 02:25:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:23:45.706 02:25:44 -- host/multipath.sh@67 -- # active_port=4420
00:23:45.706 02:25:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:45.706 Attaching 4 probes...
00:23:45.706 @path[10.0.0.2, 4420]: 18695
00:23:45.706 @path[10.0.0.2, 4420]: 18943
00:23:45.706 @path[10.0.0.2, 4420]: 18984
00:23:45.706 @path[10.0.0.2, 4420]: 18778
00:23:45.706 @path[10.0.0.2, 4420]: 18795
00:23:45.706 02:25:44 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:23:45.706 02:25:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:23:45.706 02:25:44 -- host/multipath.sh@69 -- # sed -n 1p
00:23:45.706 02:25:44 -- host/multipath.sh@69 -- # port=4420
00:23:45.706 02:25:44 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:23:45.706 02:25:44 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:23:45.706 02:25:44 -- host/multipath.sh@72 -- # kill 98801
00:23:45.706 02:25:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:45.706 02:25:44 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:45.706 [2024-07-15 02:25:45.070014] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:45.964 02:25:45 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:45.964 02:25:45 -- host/multipath.sh@111 -- # sleep 6
00:23:52.523 02:25:51 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:23:52.523 02:25:51 -- host/multipath.sh@65 -- # dtrace_pid=98995
00:23:52.523 02:25:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97944 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:23:52.523 02:25:51 -- host/multipath.sh@66 -- # sleep 6
00:23:59.107 02:25:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:23:59.107 02:25:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:23:59.107 02:25:57 -- host/multipath.sh@67 -- # active_port=4421
00:23:59.107 02:25:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:59.107 Attaching 4 probes...
00:23:59.107 @path[10.0.0.2, 4421]: 19345
00:23:59.107 @path[10.0.0.2, 4421]: 19806
00:23:59.107 @path[10.0.0.2, 4421]: 19748
00:23:59.107 @path[10.0.0.2, 4421]: 19672
00:23:59.108 @path[10.0.0.2, 4421]: 20721
00:23:59.108 02:25:57 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:23:59.108 02:25:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:23:59.108 02:25:57 -- host/multipath.sh@69 -- # sed -n 1p
00:23:59.108 02:25:57 -- host/multipath.sh@69 -- # port=4421
00:23:59.108 02:25:57 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:23:59.108 02:25:57 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:23:59.108 02:25:57 -- host/multipath.sh@72 -- # kill 98995
00:23:59.108 02:25:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:59.108 02:25:57 -- host/multipath.sh@114 -- # killprocess 98051
00:23:59.108 02:25:57 -- common/autotest_common.sh@926 -- # '[' -z 98051 ']'
00:23:59.108 02:25:57 -- common/autotest_common.sh@930 -- # kill -0 98051
00:23:59.108 02:25:57 -- common/autotest_common.sh@931 -- # uname
00:23:59.108 02:25:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:59.108 02:25:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98051
00:23:59.108 killing process with pid 98051
00:23:59.108 02:25:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:23:59.108 02:25:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:23:59.108 02:25:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98051'
00:23:59.108 02:25:57 -- common/autotest_common.sh@945 -- # kill 98051
00:23:59.108 02:25:57 -- common/autotest_common.sh@950 -- # wait 98051
00:23:59.108 Connection closed with partial response:
00:23:59.108 02:25:57 -- host/multipath.sh@116 -- # wait 98051
00:23:59.108 02:25:57 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:23:59.108 [2024-07-15 02:25:00.347223] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:23:59.108 [2024-07-15 02:25:00.347349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98051 ]
00:23:59.108 [2024-07-15 02:25:00.485210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:59.108 [2024-07-15 02:25:00.570656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:23:59.108 Running I/O for 90 seconds...
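The failback exercised above (multipath.sh@107 and @108) comes down to two RPCs: re-add the 4421 listener, then promote it to optimized so the host's ANA-aware multipath policy moves I/O back to it. The same calls as they would be issued by hand, with the NQN and address taken from the trace; the bursts of ASYMMETRIC ACCESS INACCESSIBLE completions in the try.txt dump below appear to be the host retrying I/O across exactly these kinds of transitions:

  # Bring the 4421 path back and make it the preferred (optimized) one.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n optimized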
00:23:59.108 [2024-07-15 02:25:10.706391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.108 [2024-07-15 02:25:10.706461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:59.108 [... several hundred further nvme_qpair.c command/completion pairs follow in the try.txt dump: READ and WRITE commands on qid:1 (various cid and lba values) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), in two bursts at 02:25:10.70-10.71 and 02:25:17.24, i.e. during the earlier path transitions of the run ...]
00:23:59.112 [2024-07-15 02:25:17.242121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 02:25:17.242136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.112 [2024-07-15 02:25:17.242467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.112 [2024-07-15 02:25:17.242542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.112 [2024-07-15 02:25:17.242717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112280 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.242962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.242991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.243021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.243043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.243057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.243079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.243092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.112 [2024-07-15 02:25:17.243114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.112 [2024-07-15 02:25:17.243129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.243151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.243165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.243187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.243201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.243907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.113 [2024-07-15 02:25:17.243937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.243963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.113 [2024-07-15 02:25:17.243977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.113 [2024-07-15 02:25:17.244015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.113 [2024-07-15 02:25:17.244052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.113 [2024-07-15 02:25:17.244162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c 
p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.113 [2024-07-15 02:25:17.244364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.113 [2024-07-15 02:25:17.244522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-15 02:25:17.244647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:59.113 [2024-07-15 02:25:17.244673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.113 [2024-07-15 02:25:17.244687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.244726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.244739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.244762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.244776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.244799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.244813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.244835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.244849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.244872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.244893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.244918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.244931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.244955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.244970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.244993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.245044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 
02:25:17.245082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.245583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112544 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.245744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.245786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.245829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.245901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.245947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.245977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.245992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.246022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.114 [2024-07-15 02:25:17.246038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.246067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.246082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.246120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.246151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.246195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.246224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.114 [2024-07-15 02:25:17.246251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-15 02:25:17.246265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:17.246292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:17.246305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:17.246332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.115 [2024-07-15 02:25:17.246346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.262953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.115 [2024-07-15 02:25:24.263052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.115 [2024-07-15 02:25:24.263121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.115 [2024-07-15 02:25:24.263189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.115 [2024-07-15 02:25:24.263408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263569] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.263967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.263982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.264003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.264047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.264067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 
[2024-07-15 02:25:24.264097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.264117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.264130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.264150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.264164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.264201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.264215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.264236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.264250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.115 [2024-07-15 02:25:24.264270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.115 [2024-07-15 02:25:24.264285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.264305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.264320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.264340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.116 [2024-07-15 02:25:24.264365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.264387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.264402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.264424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.264438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.264459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.116 [2024-07-15 02:25:24.264474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.264506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.264520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.264541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.116 [2024-07-15 02:25:24.264555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.264575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.116 [2024-07-15 02:25:24.264590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.116 [2024-07-15 02:25:24.265162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.116 [2024-07-15 02:25:24.265239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.116 [2024-07-15 02:25:24.265273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 
p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.265975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.265996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.266010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.266040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.266055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.116 [2024-07-15 02:25:24.266076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.116 [2024-07-15 02:25:24.266091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.117 [2024-07-15 02:25:24.266113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.117 [2024-07-15 02:25:24.266129] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.117 [2024-07-15 02:25:24.266178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.117 [2024-07-15 02:25:24.266212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.117 [2024-07-15 02:25:24.266245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.117 [2024-07-15 02:25:24.266278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.117 [2024-07-15 02:25:24.266322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.117 [2024-07-15 02:25:24.266355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.117 [2024-07-15 02:25:24.266456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.266960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.266981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.267011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.267030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.267044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.267063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.267077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.267097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.267110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.267130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.117 [2024-07-15 02:25:24.267144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.267163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.117 [2024-07-15 02:25:24.267177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.267197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.267216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.267241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.267255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.267275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.117 [2024-07-15 02:25:24.267289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:59.117 [2024-07-15 02:25:24.267313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.118 [2024-07-15 02:25:24.267461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.118 [2024-07-15 02:25:24.267703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.118 [2024-07-15 02:25:24.267774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.267873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.267888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.268575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.118 [2024-07-15 02:25:24.268600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.268657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.268676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.268698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.268713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.268735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.268749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.268770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.118 [2024-07-15 02:25:24.268785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.268806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.268821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.268872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.118 [2024-07-15 02:25:24.268888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.268909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.118 [2024-07-15 02:25:24.268924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:23:59.118 [2024-07-15 02:25:24.268945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.119 [2024-07-15 02:25:24.268960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.268981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.119 [2024-07-15 02:25:24.269010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.119 [2024-07-15 02:25:24.269077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.119 [2024-07-15 02:25:24.269111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.119 [2024-07-15 02:25:24.269144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.119 [2024-07-15 02:25:24.269218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.119 [2024-07-15 02:25:24.269434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.269977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.269992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.270013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.270028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.270048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.270063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.270084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.270099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.270120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.270134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.270170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.119 [2024-07-15 02:25:24.270184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:23:59.119 [2024-07-15 02:25:24.270204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.270217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.270237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.270257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.270278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.270292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.270312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.270326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.270345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.270359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.270380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.270393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.270413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.270427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.270451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.270466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.270959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.270984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.271054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.271124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.271157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.271237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.271305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.120 [2024-07-15 02:25:24.271338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.120 [2024-07-15 02:25:24.271829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:23:59.120 [2024-07-15 02:25:24.271849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.271864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.271885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.271899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.271920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.271934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.271955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.271969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.271990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.272019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.272068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.272101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.272144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.121 [2024-07-15 02:25:24.272179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.121 [2024-07-15 02:25:24.272212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.121 [2024-07-15 02:25:24.272246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.121 [2024-07-15 02:25:24.272279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.121 [2024-07-15 02:25:24.272313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.121 [2024-07-15 02:25:24.272346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.121 [2024-07-15 02:25:24.272380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.272413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.272433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.280921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.280980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.121 [2024-07-15 02:25:24.280999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.121 [2024-07-15 02:25:24.281512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:23:59.121 [2024-07-15 02:25:24.281532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.281545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.281577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.281625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.122 [2024-07-15 02:25:24.281694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.122 [2024-07-15 02:25:24.281731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.281766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.281801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.281836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.281886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.281922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.281958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.281979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.282001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.122 [2024-07-15 02:25:24.282038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.282074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.282109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.282144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.282192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.282225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.122 [2024-07-15 02:25:24.282257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.282289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.122 [2024-07-15 02:25:24.282320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.282352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.282372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.282385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.283242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.122 [2024-07-15 02:25:24.283296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.283329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.283361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.283392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.122 [2024-07-15 02:25:24.283424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.283456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.122 [2024-07-15 02:25:24.283488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.122 [2024-07-15 02:25:24.283520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:23:59.122 [2024-07-15 02:25:24.283538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.123 [2024-07-15 02:25:24.283551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.123 [2024-07-15 02:25:24.283583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.123 [2024-07-15 02:25:24.283648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.123 [2024-07-15 02:25:24.283700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.123 [2024-07-15 02:25:24.283744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.123 [2024-07-15 02:25:24.283779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.123 [2024-07-15 02:25:24.283814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:59.123 [2024-07-15 02:25:24.283849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.123 [2024-07-15 02:25:24.283884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.123 [2024-07-15 02:25:24.283927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.123 [2024-07-15 02:25:24.283962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.283983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.123 [2024-07-15 02:25:24.284012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:23:59.123 [2024-07-15 02:25:24.284032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.123 [2024-07-15 02:25:24.284045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.123 [2024-07-15 02:25:24.284076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 
02:25:24.284378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.123 [2024-07-15 02:25:24.284729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103912 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.123 [2024-07-15 02:25:24.284744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.284764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.284778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.284799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.284813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.284834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.284853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.284875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.284889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.284911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.284925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.284946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.284960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.284996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.285025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.285065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.285100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.285133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.285167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.285200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.285234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.285266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.285299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.285332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.285365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.285950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.124 [2024-07-15 02:25:24.285977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.286004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.286025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 
m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.286048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.286063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.286096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.286112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.286133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.286148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.124 [2024-07-15 02:25:24.286183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.124 [2024-07-15 02:25:24.286198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.286770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 
02:25:24.286805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.125 [2024-07-15 02:25:24.286841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.125 [2024-07-15 02:25:24.286877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.125 [2024-07-15 02:25:24.286912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.125 [2024-07-15 02:25:24.286948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.286969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.125 [2024-07-15 02:25:24.286983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.125 [2024-07-15 02:25:24.287042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.125 [2024-07-15 02:25:24.287077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.287111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.287146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104752 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.125 [2024-07-15 02:25:24.287181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.287232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.287268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.287303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.125 [2024-07-15 02:25:24.287345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:59.125 [2024-07-15 02:25:24.287367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.126 [2024-07-15 02:25:24.287865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.126 [2024-07-15 02:25:24.287901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 
p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.287975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.287989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.126 [2024-07-15 02:25:24.288204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288275] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.126 [2024-07-15 02:25:24.288433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.126 [2024-07-15 02:25:24.288467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.126 [2024-07-15 02:25:24.288502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:59.126 [2024-07-15 02:25:24.288528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.288543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.127 
[2024-07-15 02:25:24.289314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.127 [2024-07-15 02:25:24.289452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.127 [2024-07-15 02:25:24.289567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.127 [2024-07-15 02:25:24.289602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.127 [2024-07-15 02:25:24.289670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.127 [2024-07-15 02:25:24.289742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.127 [2024-07-15 02:25:24.289778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.127 [2024-07-15 02:25:24.289814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.127 [2024-07-15 02:25:24.289904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.289962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.289976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.290005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.290021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.290042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.127 [2024-07-15 02:25:24.290057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.127 [2024-07-15 02:25:24.290078] nvme_qpair.c: 
[... several hundred repeated nvme_qpair.c *NOTICE* records elided: alternating 243:nvme_io_qpair_print_command entries (READ and WRITE, sqid:1, nsid:1, lba 103784-105064, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT / SGL DATA BLOCK OFFSET) each followed by a 474:spdk_nvme_print_completion entry reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, logged between 2024-07-15 02:25:24.290 and 02:25:24.309 (elapsed 00:23:59.127-00:23:59.134) ...]
m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.134 [2024-07-15 02:25:24.309510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.134 [2024-07-15 02:25:24.309543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.134 [2024-07-15 02:25:24.309848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 
02:25:24.309896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:59.134 [2024-07-15 02:25:24.309950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.134 [2024-07-15 02:25:24.309965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.309985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.309999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.310019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.310039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.310061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.310075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.310096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.310109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.310752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.310779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.310805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.310820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.310841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.310855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.310883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104928 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.310913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.310934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.310963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.310982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.310995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.311029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.311061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.311094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.311127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.311173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.311207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.311240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.311273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.311306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.311339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.311372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.311405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.311438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.135 [2024-07-15 02:25:24.311471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:59.135 [2024-07-15 02:25:24.311490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.135 [2024-07-15 02:25:24.311504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 
dnr:0 00:23:59.136 [2024-07-15 02:25:24.311597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.136 [2024-07-15 02:25:24.311716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.311971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.311990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312310] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.136 [2024-07-15 02:25:24.312492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.136 [2024-07-15 02:25:24.312592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:59.136 [2024-07-15 02:25:24.312638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.136 [2024-07-15 02:25:24.312656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.312677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104592 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:59.137 [2024-07-15 02:25:24.312691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.312712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.137 [2024-07-15 02:25:24.312726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.312746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.137 [2024-07-15 02:25:24.312761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.312781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.312795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.312815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.137 [2024-07-15 02:25:24.312829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.312849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.137 [2024-07-15 02:25:24.312863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.312884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.312906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.137 [2024-07-15 02:25:24.313508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.137 [2024-07-15 02:25:24.313584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.137 [2024-07-15 02:25:24.313617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f 
p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.313983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.313996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314298] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.314331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.314357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.137 [2024-07-15 02:25:24.321578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:59.137 [2024-07-15 02:25:24.321660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.138 [2024-07-15 02:25:24.321680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.321702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.138 [2024-07-15 02:25:24.321717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.321737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.138 [2024-07-15 02:25:24.321751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.321772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.138 [2024-07-15 02:25:24.321785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.321806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.138 [2024-07-15 02:25:24.321820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.321840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.138 [2024-07-15 02:25:24.321854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.321888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.138 [2024-07-15 02:25:24.321903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.321924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.138 [2024-07-15 
02:25:24.321939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.321959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.138 [2024-07-15 02:25:24.321973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.322008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:59.138 [2024-07-15 02:25:24.322022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.322042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.138 [2024-07-15 02:25:24.322055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.322089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.138 [2024-07-15 02:25:24.322105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.322125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.138 [2024-07-15 02:25:24.322138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.322158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.138 [2024-07-15 02:25:24.322171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.322191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.138 [2024-07-15 02:25:24.322204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.322224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.138 [2024-07-15 02:25:24.322237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.322256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.138 [2024-07-15 02:25:24.322270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:59.138 [2024-07-15 02:25:24.322289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104224 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.138 [2024-07-15 02:25:24.322303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:23:59.138 [2024-07-15 02:25:24.322322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.138 [2024-07-15 02:25:24.322336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0
[2024-07-15 02:25:24.322355 - 02:25:24.326004: the remaining command/completion notice pairs in this burst are identical in form and are elided. Every outstanding READ/WRITE on qid:1 (nsid:1, len:8, lba 103784-105064) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 006f through 0042; READ commands print "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0", WRITE commands "SGL DATA BLOCK OFFSET 0x0 len:0x1000".]
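The two status strings in this stretch of the log are SPDK's rendering of the NVMe completion status field, with the (xx/yy) suffix being the (status code type / status code) pair in hex. Per the NVMe base spec, SCT 0x3 is Path Related Status and its SC 0x02 is Asymmetric Access Inaccessible (the ANA state reported while the path is unusable); SCT 0x0 with SC 0x08 is the generic Command Aborted due to SQ Deletion, which takes over about 13 seconds later once the submission queue itself is deleted. A minimal standalone sketch of that decoding, assuming only spec-defined values - the struct and status_string helper below are illustrative stand-ins, not SPDK's own spdk_nvme_cpl types:

    /* Sketch only: decode the "(SCT/SC)" pair printed by
     * spdk_nvme_print_completion, using NVMe base spec values. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct nvme_status {
        uint8_t sct; /* Status Code Type: CQE DW3 bits 27:25 */
        uint8_t sc;  /* Status Code:      CQE DW3 bits 24:17 */
    };

    static const char *status_string(struct nvme_status s)
    {
        /* SCT 0x0 = Generic Command Status; SC 0x08 = aborted, SQ deleted. */
        if (s.sct == 0x0 && s.sc == 0x08)
            return "ABORTED - SQ DELETION";
        /* SCT 0x3 = Path Related Status; SC 0x02 = ANA inaccessible. */
        if (s.sct == 0x3 && s.sc == 0x02)
            return "ASYMMETRIC ACCESS INACCESSIBLE";
        return "UNKNOWN";
    }

    int main(void)
    {
        /* The two codes observed in this log. */
        struct nvme_status seen[] = { { 0x3, 0x02 }, { 0x0, 0x08 } };

        for (size_t i = 0; i < sizeof(seen) / sizeof(seen[0]); i++)
            printf("%s (%02x/%02x)\n",
                   status_string(seen[i]), seen[i].sct, seen[i].sc);
        return 0;
    }

Compiled with any C99 compiler, this prints the same two annotations seen above and below: "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" and "ABORTED - SQ DELETION (00/08)".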
00:23:59.141 [2024-07-15 02:25:37.552746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:59.141 [2024-07-15 02:25:37.552791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 02:25:37.552818 - 02:25:37.556582: identical notice pairs elided. The remaining queued READ/WRITE commands on qid:1 (nsid:1, len:8, lba 98000-99280) were all completed with ABORTED - SQ DELETION (00/08); each aborted completion reports cid:0 sqhd:0000 rather than the command's own cid. The run continues below.]
00:23:59.145 [2024-07-15 02:25:37.556609] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.145 [2024-07-15 02:25:37.556625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.145 [2024-07-15 02:25:37.556641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.145 [2024-07-15 02:25:37.556661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.145 [2024-07-15 02:25:37.556678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.145 [2024-07-15 02:25:37.556692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.145 [2024-07-15 02:25:37.556707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5d020 is same with the state(5) to be set 00:23:59.145 [2024-07-15 02:25:37.556725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:59.145 [2024-07-15 02:25:37.556736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:59.145 [2024-07-15 02:25:37.556748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98752 len:8 PRP1 0x0 PRP2 0x0 00:23:59.145 [2024-07-15 02:25:37.556762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:59.145 [2024-07-15 02:25:37.556822] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b5d020 was disconnected and freed. reset controller. 00:23:59.145 [2024-07-15 02:25:37.558123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.145 [2024-07-15 02:25:37.558222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b61e40 (9): Bad file descriptor 00:23:59.145 [2024-07-15 02:25:37.558337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.145 [2024-07-15 02:25:37.558395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.145 [2024-07-15 02:25:37.558417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b61e40 with addr=10.0.0.2, port=4421 00:23:59.145 [2024-07-15 02:25:37.558433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61e40 is same with the state(5) to be set 00:23:59.145 [2024-07-15 02:25:37.558473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b61e40 (9): Bad file descriptor 00:23:59.145 [2024-07-15 02:25:37.558497] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.145 [2024-07-15 02:25:37.558511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.145 [2024-07-15 02:25:37.558526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.145 [2024-07-15 02:25:37.558551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
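The two connect() failed, errno = 111 (ECONNREFUSED) lines above are the crux of this step: the host is dialing the second listener (10.0.0.2:4421) before it exists, and bdev_nvme keeps retrying on its reconnect timer until the listener appears ten seconds later. A minimal bash sketch of the same probe-until-ready idea; wait_for_listener is a hypothetical helper, not part of the test scripts, and it leans on bash's /dev/tcp pseudo-device:

    # Hypothetical helper: poll until a TCP listener answers, mirroring what
    # the bdev_nvme reconnect path does while connect() returns ECONNREFUSED.
    wait_for_listener() {
        local ip=$1 port=$2 deadline=$((SECONDS + 30))
        while (( SECONDS < deadline )); do
            # bash opens /dev/tcp/<ip>/<port> via a plain TCP connect;
            # the subshell closes fd 3 again on exit
            if (exec 3<>"/dev/tcp/$ip/$port") 2>/dev/null; then
                return 0
            fi
            sleep 1
        done
        return 1
    }
    wait_for_listener 10.0.0.2 4421 || echo 'listener never came up'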
00:23:59.145 [2024-07-15 02:25:37.558566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:59.145 [2024-07-15 02:25:47.618700] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:59.145 Received shutdown signal, test time was about 55.149156 seconds
00:23:59.145
00:23:59.145                                                       Latency(us)
00:23:59.145 Device Information                                 : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min          max
00:23:59.145 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:59.145      Verification LBA range: start 0x0 length 0x4000
00:23:59.145      Nvme0n1                                       :      55.15   11161.87      43.60       0.00     0.00   11449.55     729.83   7076934.75
00:23:59.145 ===================================================================================================================
00:23:59.145 Total                                              :              11161.87      43.60       0.00     0.00   11449.55     729.83   7076934.75
00:23:59.145 02:25:57 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:59.145 02:25:58 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:23:59.145 02:25:58 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:23:59.145 02:25:58 -- host/multipath.sh@125 -- # nvmftestfini
00:23:59.145 02:25:58 -- nvmf/common.sh@476 -- # nvmfcleanup
00:23:59.145 02:25:58 -- nvmf/common.sh@116 -- # sync
00:23:59.145 02:25:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:23:59.145 02:25:58 -- nvmf/common.sh@119 -- # set +e
00:23:59.145 02:25:58 -- nvmf/common.sh@120 -- # for i in {1..20}
00:23:59.145 02:25:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:23:59.145 rmmod nvme_tcp
00:23:59.145 rmmod nvme_fabrics
00:23:59.145 rmmod nvme_keyring
00:23:59.145 02:25:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:23:59.145 02:25:58 -- nvmf/common.sh@123 -- # set -e
00:23:59.145 02:25:58 -- nvmf/common.sh@124 -- # return 0
00:23:59.145 02:25:58 -- nvmf/common.sh@477 -- # '[' -n 97944 ']'
00:23:59.145 02:25:58 -- nvmf/common.sh@478 -- # killprocess 97944
00:23:59.145 02:25:58 -- common/autotest_common.sh@926 -- # '[' -z 97944 ']'
00:23:59.145 02:25:58 -- common/autotest_common.sh@930 -- # kill -0 97944
00:23:59.145 02:25:58 -- common/autotest_common.sh@931 -- # uname
00:23:59.145 02:25:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:59.145 02:25:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97944
00:23:59.145 killing process with pid 97944
00:23:59.145 02:25:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:23:59.145 02:25:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:23:59.145 02:25:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97944'
00:23:59.145 02:25:58 -- common/autotest_common.sh@945 -- # kill 97944
00:23:59.145 02:25:58 -- common/autotest_common.sh@950 -- # wait 97944
00:23:59.145 02:25:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:23:59.145 02:25:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:23:59.145 02:25:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:23:59.145 02:25:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:59.145 02:25:58 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:23:59.145 02:25:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:59.145 02:25:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:59.145 02:25:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:59.145 02:25:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:23:59.145
00:23:59.145 real    1m1.022s
00:23:59.145 user    2m52.133s
00:23:59.145 sys     0m14.142s
00:23:59.145 02:25:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:59.145 02:25:58 -- common/autotest_common.sh@10 -- # set +x
00:23:59.145 ************************************
00:23:59.145 END TEST nvmf_multipath
00:23:59.145 ************************************
00:23:59.146 02:25:58 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:23:59.146 02:25:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:23:59.146 02:25:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:23:59.146 02:25:58 -- common/autotest_common.sh@10 -- # set +x
00:23:59.146 ************************************
00:23:59.146 START TEST nvmf_timeout
00:23:59.146 ************************************
00:23:59.146 02:25:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:23:59.146 * Looking for test storage...
00:23:59.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:23:59.146 02:25:58 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:23:59.146 02:25:58 -- nvmf/common.sh@7 -- # uname -s
00:23:59.146 02:25:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:59.146 02:25:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:59.146 02:25:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:59.146 02:25:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:59.146 02:25:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:59.146 02:25:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:59.146 02:25:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:59.146 02:25:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:59.146 02:25:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:59.146 02:25:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:59.146 02:25:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1
00:23:59.146 02:25:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1
00:23:59.146 02:25:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:59.146 02:25:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:59.146 02:25:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:23:59.146 02:25:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:23:59.146 02:25:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:59.146 02:25:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:59.146 02:25:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:59.146 02:25:58 -- paths/export.sh@2-6 -- # [PATH export trace elided: repeated PATH assignments prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the inherited PATH, followed by export PATH and echo of the result]
00:23:59.146 02:25:58 -- nvmf/common.sh@46 -- # : 0
00:23:59.146 02:25:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:23:59.146 02:25:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:23:59.146 02:25:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:23:59.146 02:25:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:59.146 02:25:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:59.146 02:25:58 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:23:59.146 02:25:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:23:59.146 02:25:58 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:23:59.146 02:25:58 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:59.146 02:25:58 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:59.146 02:25:58 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:59.146 02:25:58 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
00:23:59.146 02:25:58 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:59.146 02:25:58 -- host/timeout.sh@19 -- # nvmftestinit
00:23:59.146 02:25:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:23:59.146 02:25:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:59.146 02:25:58 -- nvmf/common.sh@436 -- # prepare_net_devs
00:23:59.146 02:25:58 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:23:59.146 02:25:58 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:23:59.146 02:25:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:59.146 02:25:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:59.146 02:25:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:59.405 02:25:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
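The killprocess helper traced in the teardown above follows a pattern worth pulling out. A condensed bash sketch, simplified from the trace (the real autotest_common.sh helper also refuses to kill sudo and reaps the child with wait where it can):

    # Sketch of the killprocess pattern: confirm the pid is alive, SIGTERM it,
    # then block until it is gone so the next test starts from a clean slate.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        kill "$pid"
        while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
    }
    killprocess_sketch 97944    # pid of the nvmf_tgt from the trace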
00:23:59.405 02:25:58 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:23:59.405 02:25:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:23:59.405 02:25:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:23:59.405 02:25:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:23:59.405 02:25:58 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:23:59.405 02:25:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:59.405 02:25:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:59.405 02:25:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:23:59.405 02:25:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:23:59.405 02:25:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:23:59.405 02:25:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:23:59.405 02:25:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:23:59.405 02:25:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:59.405 02:25:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:23:59.405 02:25:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:23:59.405 02:25:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:23:59.405 02:25:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:23:59.405 02:25:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:23:59.405 02:25:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:23:59.405 Cannot find device "nvmf_tgt_br"
00:23:59.405 02:25:58 -- nvmf/common.sh@154 -- # true
00:23:59.405 02:25:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:23:59.405 Cannot find device "nvmf_tgt_br2"
00:23:59.405 02:25:58 -- nvmf/common.sh@155 -- # true
00:23:59.405 02:25:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:23:59.405 02:25:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:23:59.405 Cannot find device "nvmf_tgt_br"
00:23:59.405 02:25:58 -- nvmf/common.sh@157 -- # true
00:23:59.405 02:25:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:23:59.405 Cannot find device "nvmf_tgt_br2"
00:23:59.405 02:25:58 -- nvmf/common.sh@158 -- # true
00:23:59.405 02:25:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:23:59.405 02:25:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:23:59.405 02:25:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:59.405 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:59.405 02:25:58 -- nvmf/common.sh@161 -- # true
00:23:59.405 02:25:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:59.405 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:23:59.405 02:25:58 -- nvmf/common.sh@162 -- # true
00:23:59.405 02:25:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:23:59.405 02:25:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:23:59.405 02:25:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:23:59.405 02:25:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:23:59.405 02:25:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:23:59.405 02:25:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:23:59.405 02:25:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:23:59.405 02:25:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:23:59.405 02:25:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:23:59.405 02:25:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:23:59.405 02:25:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:23:59.405 02:25:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:23:59.405 02:25:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:23:59.405 02:25:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:23:59.405 02:25:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:23:59.405 02:25:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:23:59.405 02:25:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:23:59.405 02:25:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:23:59.405 02:25:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:23:59.405 02:25:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:23:59.405 02:25:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:23:59.405 02:25:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:23:59.664 02:25:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:23:59.664 02:25:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:23:59.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:59.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:23:59.664
00:23:59.664 --- 10.0.0.2 ping statistics ---
00:23:59.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:59.664 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:23:59.664 02:25:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:23:59.664 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:23:59.664 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms
00:23:59.664
00:23:59.664 --- 10.0.0.3 ping statistics ---
00:23:59.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:59.664 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:23:59.664 02:25:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:23:59.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
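Condensed from the nvmf_veth_init trace above, this is the topology the three pings verify: the target lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace at 10.0.0.1, and a bridge joins the veth peers. A re-statement of the traced commands (second target interface omitted for brevity; run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target side enters the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                  # bridge ties the peers together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                       # initiator -> target, as in the trace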
00:23:59.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms
00:23:59.664
00:23:59.664 --- 10.0.0.1 ping statistics ---
00:23:59.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:59.664 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:23:59.664 02:25:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:59.664 02:25:58 -- nvmf/common.sh@421 -- # return 0
00:23:59.664 02:25:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:23:59.664 02:25:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:59.664 02:25:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:23:59.664 02:25:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:23:59.664 02:25:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:59.664 02:25:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:23:59.664 02:25:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:23:59.664 02:25:58 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3
00:23:59.664 02:25:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:23:59.664 02:25:59 -- common/autotest_common.sh@712 -- # xtrace_disable
00:23:59.664 02:25:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.664 02:25:59 -- nvmf/common.sh@469 -- # nvmfpid=99307
00:23:59.664 02:25:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:23:59.664 02:25:59 -- nvmf/common.sh@470 -- # waitforlisten 99307
00:23:59.664 02:25:59 -- common/autotest_common.sh@819 -- # '[' -z 99307 ']'
00:23:59.664 02:25:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:59.664 02:25:59 -- common/autotest_common.sh@824 -- # local max_retries=100
00:23:59.664 02:25:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:59.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:59.664 02:25:59 -- common/autotest_common.sh@828 -- # xtrace_disable
00:23:59.664 02:25:59 -- common/autotest_common.sh@10 -- # set +x
00:23:59.664 [2024-07-15 02:25:59.060366] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:23:59.664 [2024-07-15 02:25:59.061068] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:59.923 [2024-07-15 02:25:59.201045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:23:59.923 [2024-07-15 02:25:59.278673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:23:59.923 [2024-07-15 02:25:59.278872] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:59.923 [2024-07-15 02:25:59.278884] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:59.923 [2024-07-15 02:25:59.278892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:59.923 [2024-07-15 02:25:59.279035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:59.923 [2024-07-15 02:25:59.279296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:00.489 02:26:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:00.489 02:26:00 -- common/autotest_common.sh@852 -- # return 0
00:24:00.489 02:26:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:24:00.489 02:26:00 -- common/autotest_common.sh@718 -- # xtrace_disable
00:24:00.489 02:26:00 -- common/autotest_common.sh@10 -- # set +x
00:24:00.747 02:26:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:00.747 02:26:00 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:00.747 02:26:00 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:00.747 [2024-07-15 02:26:00.299409] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:01.004 02:26:00 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:01.262 Malloc0
00:24:01.262 02:26:00 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:01.521 02:26:00 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:01.779 02:26:01 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:01.779 [2024-07-15 02:26:01.335947] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:02.037 02:26:01 -- host/timeout.sh@32 -- # bdevperf_pid=99399
00:24:02.037 02:26:01 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:02.037 02:26:01 -- host/timeout.sh@34 -- # waitforlisten 99399 /var/tmp/bdevperf.sock
00:24:02.037 02:26:01 -- common/autotest_common.sh@819 -- # '[' -z 99399 ']'
00:24:02.037 02:26:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:02.037 02:26:01 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:02.037 02:26:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:02.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:02.037 02:26:01 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:02.037 02:26:01 -- common/autotest_common.sh@10 -- # set +x
00:24:02.037 [2024-07-15 02:26:01.400663] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
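The five RPCs traced above are the entire target-side setup for this test. Collected in one place for reference, with the paths, names, and arguments exactly as traced (TCP transport with an 8192-byte in-capsule data limit, a 64 MiB malloc bdev with 512-byte blocks, and a listener on 10.0.0.2:4420):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420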
00:24:02.037 [2024-07-15 02:26:01.400758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99399 ]
00:24:02.037 [2024-07-15 02:26:01.535097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:02.037 [2024-07-15 02:26:01.620114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:03.259 02:26:02 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:03.259 02:26:02 -- common/autotest_common.sh@852 -- # return 0
00:24:03.259 02:26:02 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:03.259 02:26:02 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:24:03.518 NVMe0n1
00:24:03.518 02:26:02 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:03.518 02:26:02 -- host/timeout.sh@51 -- # rpc_pid=99447
00:24:03.518 02:26:02 -- host/timeout.sh@53 -- # sleep 1
00:24:03.518 Running I/O for 10 seconds...
00:24:04.450 02:26:03 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:04.727 [2024-07-15 02:26:04.183932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2461360 is same with the state(5) to be set
00:24:04.727 [2024-07-15 02:26:04.184017 - 02:26:04.184366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: [~45 identical "The recv state of tqpair=0x2461360 is same with the state(5) to be set" errors elided]
00:24:04.727 [2024-07-15 02:26:04.184852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:04.727 [2024-07-15 02:26:04.184882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:04.729 [2024-07-15 02:26:04.184905 - 02:26:04.186295] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [~60 further READ/WRITE command prints (sqid:1, lba 1192-2120, len:8), each completed with ABORTED - SQ DELETION (00/08) - repeated entries elided]
00:24:04.729 [2024-07-15 02:26:04.186307] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.729 [2024-07-15 02:26:04.186437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.729 [2024-07-15 02:26:04.186457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:69 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.729 [2024-07-15 02:26:04.186527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.729 [2024-07-15 02:26:04.186567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.729 [2024-07-15 02:26:04.186587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.729 [2024-07-15 02:26:04.186684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.729 [2024-07-15 02:26:04.186724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:04.729 [2024-07-15 02:26:04.186744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.729 [2024-07-15 02:26:04.186784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.729 [2024-07-15 02:26:04.186804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.729 [2024-07-15 02:26:04.186815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.186823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.186834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.186844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.186855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.186869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.186881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.186890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.186901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.186910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.186921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.186930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.186941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.186950] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.186961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.186971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.186982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.186991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187154] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.187173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.187198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.187238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.187258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.187278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.187357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.730 [2024-07-15 02:26:04.187378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 
02:26:04.187575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.730 [2024-07-15 02:26:04.187583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140980 is same with the state(5) to be set 00:24:04.730 [2024-07-15 02:26:04.187618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:04.730 [2024-07-15 02:26:04.187627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:04.730 [2024-07-15 02:26:04.187635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1936 len:8 PRP1 0x0 PRP2 0x0 00:24:04.730 [2024-07-15 02:26:04.187644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.730 [2024-07-15 02:26:04.187699] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1140980 was disconnected and freed. reset controller. 00:24:04.730 [2024-07-15 02:26:04.187949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:04.731 [2024-07-15 02:26:04.188027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11356b0 (9): Bad file descriptor 00:24:04.731 [2024-07-15 02:26:04.188134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.731 [2024-07-15 02:26:04.188182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.731 [2024-07-15 02:26:04.188198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11356b0 with addr=10.0.0.2, port=4420 00:24:04.731 [2024-07-15 02:26:04.188216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11356b0 is same with the state(5) to be set 00:24:04.731 [2024-07-15 02:26:04.188235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11356b0 (9): Bad file descriptor 00:24:04.731 [2024-07-15 02:26:04.188251] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:04.731 [2024-07-15 02:26:04.188260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:04.731 [2024-07-15 02:26:04.188271] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:04.731 [2024-07-15 02:26:04.188291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
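For context on the connect() failures above and below: errno 111 on Linux is ECONNREFUSED, which is exactly what the initiator sees once the test has torn down the target's TCP listener. A minimal sketch of forcing this state by hand, using the same two RPCs this run invokes elsewhere (paths and addresses as in this CI job; assumes a target already serving nqn.2016-06.io.spdk:cnode1):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop the target listener: live qpairs are torn down and every queued I/O
    # completes as ABORTED - SQ DELETION, as in the dump above.
    "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Reconnect attempts now fail with ECONNREFUSED (errno 111) until:
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420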
00:24:04.731 [2024-07-15 02:26:04.188301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
02:26:04 -- host/timeout.sh@56 -- # sleep 2
00:24:07.259 [2024-07-15 02:26:06.188486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.259 [2024-07-15 02:26:06.188595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:07.259 [2024-07-15 02:26:06.188657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11356b0 with addr=10.0.0.2, port=4420
00:24:07.259 [2024-07-15 02:26:06.188673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11356b0 is same with the state(5) to be set
00:24:07.259 [2024-07-15 02:26:06.188701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11356b0 (9): Bad file descriptor
00:24:07.259 [2024-07-15 02:26:06.188734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:07.259 [2024-07-15 02:26:06.188754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:07.259 [2024-07-15 02:26:06.188765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:07.259 [2024-07-15 02:26:06.188792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:07.259 [2024-07-15 02:26:06.188804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:07.259 02:26:06 -- host/timeout.sh@57 -- # get_controller
02:26:06 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
02:26:06 -- host/timeout.sh@41 -- # jq -r '.[].name'
02:26:06 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
02:26:06 -- host/timeout.sh@58 -- # get_bdev
02:26:06 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
02:26:06 -- host/timeout.sh@37 -- # jq -r '.[].name'
02:26:06 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
02:26:06 -- host/timeout.sh@61 -- # sleep 5
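The @41/@37 trace lines above show how the test polls controller and bdev state over bdevperf's RPC socket. A standalone rendering of those two helpers, reconstructed from the xtrace output (an approximation of the real functions in host/timeout.sh, not a verbatim copy):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    get_controller() {
        # Names of attached NVMe controllers: "NVMe0" while alive, empty once deleted
        "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # Names of exposed bdevs: "NVMe0n1" while the namespace is still present
        "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
    }

    [[ $(get_controller) == NVMe0 ]] && [[ $(get_bdev) == NVMe0n1 ]]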
00:24:09.157 [2024-07-15 02:26:08.188922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.157 [2024-07-15 02:26:08.189020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.157 [2024-07-15 02:26:08.189039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11356b0 with addr=10.0.0.2, port=4420
00:24:09.157 [2024-07-15 02:26:08.189053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11356b0 is same with the state(5) to be set
00:24:09.157 [2024-07-15 02:26:08.189078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11356b0 (9): Bad file descriptor
00:24:09.157 [2024-07-15 02:26:08.189098] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:09.157 [2024-07-15 02:26:08.189107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:09.157 [2024-07-15 02:26:08.189117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:09.157 [2024-07-15 02:26:08.189143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.157 [2024-07-15 02:26:08.189154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:11.058 [2024-07-15 02:26:10.189300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:11.058 [2024-07-15 02:26:10.189384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:11.058 [2024-07-15 02:26:10.189396] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:11.058 [2024-07-15 02:26:10.189408] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:11.058 [2024-07-15 02:26:10.189436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:11.647
00:24:11.647                                                  Latency(us)
00:24:11.647 Device Information            : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average        min        max
00:24:11.647 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:11.647   Verification LBA range: start 0x0 length 0x4000
00:24:11.647   NVMe0n1                     :       8.16  2029.36     7.93    15.68     0.00   62501.51    2740.60 7015926.69
00:24:11.647 ===================================================================================================================
00:24:11.647 Total                         :              2029.36     7.93    15.68     0.00   62501.51    2740.60 7015926.69
00:24:11.647 0
00:24:12.215 02:26:11 -- host/timeout.sh@62 -- # get_controller
02:26:11 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
02:26:11 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:12.473 02:26:12 -- host/timeout.sh@62 -- # [[ '' == '' ]]
02:26:12 -- host/timeout.sh@63 -- # get_bdev
02:26:12 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
02:26:12 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:13.041 02:26:12 -- host/timeout.sh@63 -- # [[ '' == '' ]]
02:26:12 -- host/timeout.sh@65 -- # wait 99447
02:26:12 -- host/timeout.sh@67 -- # killprocess 99399
02:26:12 -- common/autotest_common.sh@926 -- # '[' -z 99399 ']'
02:26:12 -- common/autotest_common.sh@930 -- # kill -0 99399
02:26:12 -- common/autotest_common.sh@931 -- # uname
02:26:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
02:26:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99399
02:26:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2
02:26:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
killing process with pid 99399
02:26:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99399'
Received shutdown signal, test time was about 9.292834 seconds
00:24:13.042
00:24:13.042                                                  Latency(us)
00:24:13.042 Device Information            : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average        min        max
00:24:13.042 ===================================================================================================================
00:24:13.042 Total                         :                 0.00     0.00     0.00     0.00       0.00       0.00       0.00
02:26:12 -- common/autotest_common.sh@945 -- # kill 99399
02:26:12 --
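A quick sanity check on the completed run's table (my arithmetic, not part of the captured output): with 4096-byte I/Os, throughput in MiB/s should equal IOPS x 4096 / 2^20, and it does:

    echo '2029.36 * 4096 / 1048576' | bc -l
    # 7.92718750

which rounds to the 7.93 MiB/s reported on the NVMe0n1 row.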
common/autotest_common.sh@950 -- # wait 99399
00:24:13.042 02:26:12 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:13.300 [2024-07-15 02:26:12.755691] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
02:26:12 -- host/timeout.sh@74 -- # bdevperf_pid=99605
02:26:12 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
02:26:12 -- host/timeout.sh@76 -- # waitforlisten 99605 /var/tmp/bdevperf.sock
02:26:12 -- common/autotest_common.sh@819 -- # '[' -z 99605 ']'
02:26:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
02:26:12 -- common/autotest_common.sh@824 -- # local max_retries=100
02:26:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
02:26:12 -- common/autotest_common.sh@828 -- # xtrace_disable
02:26:12 -- common/autotest_common.sh@10 -- # set +x
00:24:13.300 [2024-07-15 02:26:12.820753] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:24:13.300 [2024-07-15 02:26:12.820829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99605 ]
00:24:13.558 [2024-07-15 02:26:12.949816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:13.558 [2024-07-15 02:26:13.037950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:14.501 02:26:13 -- common/autotest_common.sh@848 -- # (( i == 0 ))
02:26:13 -- common/autotest_common.sh@852 -- # return 0
02:26:13 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
02:26:14 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:24:14.759 NVMe0n1
00:24:15.016 02:26:14 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
02:26:14 -- host/timeout.sh@84 -- # rpc_pid=99652
02:26:14 -- host/timeout.sh@86 -- # sleep 1
00:24:15.950 Running I/O for 10 seconds...
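The three timeout flags on that attach are what this test exercises. An annotated restatement (the flag glosses are my reading of the bdev_nvme reconnect options; SPDK's JSON-RPC documentation is authoritative):

    # --reconnect-delay-sec 1       pause between reconnect attempts
    # --fast-io-fail-timeout-sec 2  once disconnected this long, new I/O fails fast instead of queueing
    # --ctrlr-loss-timeout-sec 5    once disconnected this long, stop retrying and delete the controller
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

With the listener removed (as the script does next), roughly five 1-second reconnect attempts should each fail with ECONNREFUSED before NVMe0 is dropped, which is consistent with the empty get_controller/get_bdev checks seen after the earlier run.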
00:24:15.950 02:26:15 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:16.211 [2024-07-15 02:26:15.529134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466940 is same with the state(5) to be set
[... 2024-07-15 02:26:15.529202 through 02:26:15.529537: the identical nvmf_tcp_qpair_set_recv_state *ERROR* line repeated dozens of times for tqpair=0x2466940 ...]
00:24:16.211 [2024-07-15 02:26:15.529978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.211 [2024-07-15 02:26:15.530023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-07-15 02:26:15.530048 through 02:26:15.530922: the same two-line pattern repeated for the remaining queued READ/WRITE commands on sqid:1 (lbas in the 120104-121008 range), each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) ...]
00:24:16.212 [2024-07-15 02:26:15.530932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.212 [2024-07-15 02:26:15.530943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.212 [2024-07-15 02:26:15.530952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.212 [2024-07-15 02:26:15.530964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.212 [2024-07-15 02:26:15.530973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.212 [2024-07-15 02:26:15.530985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.212 [2024-07-15 02:26:15.530995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.212 [2024-07-15 02:26:15.531006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.212 [2024-07-15 02:26:15.531016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.212 [2024-07-15 02:26:15.531028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.212 [2024-07-15 02:26:15.531037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.212 [2024-07-15 02:26:15.531049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.212 [2024-07-15 02:26:15.531058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.212 [2024-07-15 02:26:15.531070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.212 [2024-07-15 02:26:15.531080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.212 [2024-07-15 02:26:15.531091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.212 [2024-07-15 02:26:15.531101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:16.213 [2024-07-15 02:26:15.531155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 
02:26:15.531368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531810] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.213 [2024-07-15 02:26:15.531973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.213 [2024-07-15 02:26:15.531985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.213 [2024-07-15 02:26:15.531994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.214 [2024-07-15 02:26:15.532015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.214 [2024-07-15 02:26:15.532036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.214 [2024-07-15 02:26:15.532099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.214 [2024-07-15 02:26:15.532142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121368 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:16.214 [2024-07-15 02:26:15.532246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.214 [2024-07-15 02:26:15.532267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.214 [2024-07-15 02:26:15.532355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.214 [2024-07-15 02:26:15.532376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.214 [2024-07-15 02:26:15.532439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 
[2024-07-15 02:26:15.532460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532686] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.214 [2024-07-15 02:26:15.532769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac860 is same with the state(5) to be set 00:24:16.214 [2024-07-15 02:26:15.532792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:16.214 [2024-07-15 02:26:15.532800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:16.214 [2024-07-15 02:26:15.532808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120928 len:8 PRP1 0x0 PRP2 0x0 00:24:16.214 [2024-07-15 02:26:15.532817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:16.214 [2024-07-15 02:26:15.532872] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ac860 was disconnected and freed. reset controller. 
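Triage note: status (00/08) decodes as status code type 0x00 (generic) / status code 0x08 (Command Aborted due to SQ Deletion), which is the expected completion for I/O still queued when the host tears down the submission queue during a controller reset, so this flood is noise rather than a data-path failure. A quick way to confirm a flood like this is uniform rather than a mix of error types is a summary pass over the saved log; a minimal sketch, assuming the log was saved as build.log (a hypothetical filename, not from this run):

    # How many completions were aborted due to SQ deletion in this run?
    grep -c 'ABORTED - SQ DELETION' build.log
    # Distinct completion statuses seen, with counts (highest first)
    grep -oE '\*NOTICE\*: [A-Z -]+ \([0-9a-f]{2}/[0-9a-f]{2}\)' build.log | sort | uniq -c | sort -rn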
00:24:16.214 [2024-07-15 02:26:15.532952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:16.214 [2024-07-15 02:26:15.532980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.214 [2024-07-15 02:26:15.532993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:16.214 [2024-07-15 02:26:15.533002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.214 [2024-07-15 02:26:15.533024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:16.215 [2024-07-15 02:26:15.533033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.215 [2024-07-15 02:26:15.533044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:16.215 [2024-07-15 02:26:15.533053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:16.215 [2024-07-15 02:26:15.533062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a14f0 is same with the state(5) to be set
00:24:16.215 [2024-07-15 02:26:15.533280] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.215 [2024-07-15 02:26:15.533312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a14f0 (9): Bad file descriptor
00:24:16.215 [2024-07-15 02:26:15.533412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.215 [2024-07-15 02:26:15.533463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:16.215 [2024-07-15 02:26:15.533481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a14f0 with addr=10.0.0.2, port=4420
00:24:16.215 [2024-07-15 02:26:15.533493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a14f0 is same with the state(5) to be set
00:24:16.215 [2024-07-15 02:26:15.533511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a14f0 (9): Bad file descriptor
00:24:16.215 [2024-07-15 02:26:15.533541] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:16.215 [2024-07-15 02:26:15.533558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:16.215 [2024-07-15 02:26:15.533569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:16.215 [2024-07-15 02:26:15.533589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
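Triage note: the two posix_sock_create failures above are errno 111 (ECONNREFUSED): nothing is accepting on 10.0.0.2:4420 because the test has removed the listener, so every reconnect attempt in this window is expected to fail until it is re-added. When reproducing this by hand, probing the target port separates "listener gone" from other transport faults; a minimal sketch, assuming a netcat build that supports -z/-w is available on the host (nc is not part of this test script):

    # Poll the NVMe/TCP target port; exits as soon as something is listening
    until nc -z -w 1 10.0.0.2 4420; do
        echo "no listener on 10.0.0.2:4420 yet (connect refused)"
        sleep 1
    done
    echo "target is accepting connections again"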
00:24:16.215 [2024-07-15 02:26:15.533614] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:16.215 02:26:15 -- host/timeout.sh@90 -- # sleep 1
00:24:17.147 [2024-07-15 02:26:16.533744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.147 [2024-07-15 02:26:16.533858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.147 [2024-07-15 02:26:16.533878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a14f0 with addr=10.0.0.2, port=4420
00:24:17.147 [2024-07-15 02:26:16.533892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a14f0 is same with the state(5) to be set
00:24:17.147 [2024-07-15 02:26:16.533930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a14f0 (9): Bad file descriptor
00:24:17.147 [2024-07-15 02:26:16.533966] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.147 [2024-07-15 02:26:16.533978] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.147 [2024-07-15 02:26:16.533990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.147 [2024-07-15 02:26:16.534017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.147 [2024-07-15 02:26:16.534029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.147 02:26:16 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:17.405 [2024-07-15 02:26:16.758825] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:17.405 02:26:16 -- host/timeout.sh@92 -- # wait 99652
00:24:18.340 [2024-07-15 02:26:17.549955] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:25.031
00:24:25.031                                                                   Latency(us)
00:24:25.031 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min          max
00:24:25.031 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:25.031 Verification LBA range: start 0x0 length 0x4000
00:24:25.031 NVMe0n1            : 10.01        9484.40   37.05    0.00    0.00   13472.81   1347.96   3019898.88
00:24:25.031 ===================================================================================================================
00:24:25.031 Total              :              9484.40   37.05    0.00    0.00   13472.81   1347.96   3019898.88
00:24:25.031 0
00:24:25.031 02:26:24 -- host/timeout.sh@97 -- # rpc_pid=99769
00:24:25.031 02:26:24 -- host/timeout.sh@98 -- # sleep 1
00:24:25.031 02:26:24 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:25.031 Running I/O for 10 seconds...
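Triage note: these xtrace lines give the skeleton of the timeout test: the listener is re-added at host/timeout.sh@91 while the host is mid-reset, the reset then completes, and bdevperf reports the verify workload it ran across the outage. A stand-alone sketch of the same listener-bounce pattern, built only from commands visible in this log, assuming a running nvmf target with subsystem nqn.2016-06.io.spdk:cnode1 already configured and SPDK_DIR pointing at an SPDK checkout (both assumptions, not taken from this log):

    rpc=$SPDK_DIR/scripts/rpc.py   # assumed checkout location
    # Drop the listener: host reconnect attempts now fail with ECONNREFUSED
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    # Restore the listener: the host's controller reset can now complete
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Ask the already-running bdevperf instance (default RPC socket, as in
    # this log) to execute its configured tests
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests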
00:24:25.964 02:26:25 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:26.225 [2024-07-15 02:26:25.698485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c35f0 is same with the state(5) to be set
[... this identical nvmf_tcp_qpair_set_recv_state error repeats dozens of times for tqpair=0x22c35f0 between 02:26:25.698485 and 02:26:25.699153; elided here ...]
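Triage note: the host/timeout.sh@99 step removes the listener again, and the host immediately sees its I/O qpair torn down, which is why the same abort flood restarts just below. When checking target state during such a window, the subsystem's listener list can be queried directly; a minimal sketch, assuming the same rpc.py as above, and noting that nvmf_subsystem_get_listeners and the jq filter are illustrative assumptions, neither of which appears in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # An empty list here means host reconnect attempts will be refused
    # until a listener is re-added
    $rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq '.[].address'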
00:24:26.226 [2024-07-15 02:26:25.699400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.226 [2024-07-15 02:26:25.699443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... as with the first qpair teardown, the NOTICE pair repeats for dozens of further queued READ/WRITE commands on qid:1 (lba range 120168-120920), each aborted with ABORTED - SQ DELETION (00/08); elided here, and the excerpt breaks off mid-entry ...]
[2024-07-15
02:26:25.700240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700453] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700675] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.227 [2024-07-15 02:26:25.700954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.227 [2024-07-15 02:26:25.700990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-15 02:26:25.700999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 
02:26:25.701334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-15 02:26:25.701891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.228 [2024-07-15 02:26:25.701912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.228 [2024-07-15 02:26:25.701932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.229 [2024-07-15 02:26:25.701943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.701954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-15 02:26:25.701965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.701977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-15 02:26:25.701986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.701997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 
nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.229 [2024-07-15 02:26:25.702007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.229 [2024-07-15 02:26:25.702027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.229 [2024-07-15 02:26:25.702048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-15 02:26:25.702069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-15 02:26:25.702091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-15 02:26:25.702113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-15 02:26:25.702134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-15 02:26:25.702160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-15 02:26:25.702181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-15 02:26:25.702207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbc70 is same with the 
state(5) to be set 00:24:26.229 [2024-07-15 02:26:25.702231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:26.229 [2024-07-15 02:26:25.702239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:26.229 [2024-07-15 02:26:25.702247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120888 len:8 PRP1 0x0 PRP2 0x0 00:24:26.229 [2024-07-15 02:26:25.702256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:26.229 [2024-07-15 02:26:25.702313] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8dbc70 was disconnected and freed. reset controller. 00:24:26.229 [2024-07-15 02:26:25.702553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.229 [2024-07-15 02:26:25.702651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a14f0 (9): Bad file descriptor 00:24:26.229 [2024-07-15 02:26:25.702758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.229 [2024-07-15 02:26:25.702807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:26.229 [2024-07-15 02:26:25.702824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a14f0 with addr=10.0.0.2, port=4420 00:24:26.229 [2024-07-15 02:26:25.702835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a14f0 is same with the state(5) to be set 00:24:26.229 [2024-07-15 02:26:25.702854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a14f0 (9): Bad file descriptor 00:24:26.229 [2024-07-15 02:26:25.702871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:26.229 [2024-07-15 02:26:25.702882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:26.229 [2024-07-15 02:26:25.702892] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:26.229 [2024-07-15 02:26:25.702913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:26.229 [2024-07-15 02:26:25.702924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:26.229 02:26:25 -- host/timeout.sh@101 -- # sleep 3
00:24:27.164 [2024-07-15 02:26:26.703052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.164 [2024-07-15 02:26:26.703189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:27.164 [2024-07-15 02:26:26.703210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a14f0 with addr=10.0.0.2, port=4420
00:24:27.164 [2024-07-15 02:26:26.703224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a14f0 is same with the state(5) to be set
00:24:27.164 [2024-07-15 02:26:26.703252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a14f0 (9): Bad file descriptor
00:24:27.164 [2024-07-15 02:26:26.703273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:27.164 [2024-07-15 02:26:26.703284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:27.164 [2024-07-15 02:26:26.703295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:27.164 [2024-07-15 02:26:26.703324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:27.164 [2024-07-15 02:26:26.703337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... two further reconnect attempts, at 02:26:27 and 02:26:28, fail the same way (connect() errno = 111, controller reinitialization failed, another reset armed) and are omitted ...]
00:24:29.475 02:26:28 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:29.475 [2024-07-15 02:26:28.955365] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:29.475 02:26:28 -- host/timeout.sh@103 -- # wait 99769
00:24:30.407 [2024-07-15 02:26:29.741301] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
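The recovery just traced is driven entirely by the test script: the listener stays down for the 3 s sleep while the host keeps redialing (connect() errno = 111 on every attempt), and once host/timeout.sh@102 restores the listener the pending reset completes. A minimal sketch of that outage window, assuming the same rpc.py path and target address used throughout this log (the attach options behind this particular reconnect loop are not visible in this excerpt):

    # Drop the NVMe/TCP listener, hold the outage shorter than the host's
    # controller-loss timeout, then restore it so the next reconnect succeeds.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420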
00:24:35.672
00:24:35.672 Latency(us)
00:24:35.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:35.672 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:35.672 Verification LBA range: start 0x0 length 0x4000
00:24:35.672 NVMe0n1 : 10.01 8310.77 32.46 6010.27 0.00 8923.66 714.94 3019898.88
00:24:35.672 ===================================================================================================================
00:24:35.672 Total : 8310.77 32.46 6010.27 0.00 8923.66 0.00 3019898.88
00:24:35.672 0
00:24:35.672 02:26:34 -- host/timeout.sh@105 -- # killprocess 99605
00:24:35.672 02:26:34 -- common/autotest_common.sh@926 -- # '[' -z 99605 ']'
00:24:35.672 02:26:34 -- common/autotest_common.sh@930 -- # kill -0 99605
00:24:35.672 02:26:34 -- common/autotest_common.sh@931 -- # uname
00:24:35.672 02:26:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:35.672 02:26:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99605
killing process with pid 99605
Received shutdown signal, test time was about 10.000000 seconds
00:24:35.672
00:24:35.672 Latency(us)
00:24:35.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:35.672 ===================================================================================================================
00:24:35.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:35.672 02:26:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:35.672 02:26:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:35.672 02:26:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99605'
00:24:35.672 02:26:34 -- common/autotest_common.sh@945 -- # kill 99605
00:24:35.672 02:26:34 -- common/autotest_common.sh@950 -- # wait 99605
00:24:35.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:35.672 02:26:34 -- host/timeout.sh@110 -- # bdevperf_pid=99890
00:24:35.672 02:26:34 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:24:35.672 02:26:34 -- host/timeout.sh@112 -- # waitforlisten 99890 /var/tmp/bdevperf.sock
00:24:35.673 02:26:34 -- common/autotest_common.sh@819 -- # '[' -z 99890 ']'
00:24:35.673 02:26:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:35.673 02:26:34 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:35.673 02:26:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:35.673 02:26:34 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:35.673 02:26:34 -- common/autotest_common.sh@10 -- # set +x
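The killprocess helper traced above is a small autotest_common.sh shell function. A rough reconstruction from the xtrace alone (an illustrative body, not the verbatim source; the sudo branch visible in the '[' reactor_2 = sudo ']' test is stubbed out here):

    killprocess() {
        [ -z "$1" ] && return 1          # no pid given
        kill -0 "$1" || return 1         # signal 0 only checks the pid is alive
        process_name=$(ps --no-headers -o comm= "$1")
        echo "killing process with pid $1"
        kill "$1"
        wait "$1"                        # reap the child so its exit status is seen
    }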
00:24:35.673 [2024-07-15 02:26:34.873397] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:24:35.673 [2024-07-15 02:26:34.873499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99890 ]
00:24:35.673 [2024-07-15 02:26:35.011199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:35.673 [2024-07-15 02:26:35.097331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:36.239 02:26:35 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:36.239 02:26:35 -- common/autotest_common.sh@852 -- # return 0
00:24:36.239 02:26:35 -- host/timeout.sh@116 -- # dtrace_pid=99919
00:24:36.239 02:26:35 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99890 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:24:36.239 02:26:35 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:24:36.498 02:26:36 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:24:36.757 NVMe0n1
00:24:37.015 02:26:36 -- host/timeout.sh@124 -- # rpc_pid=99977
00:24:37.015 02:26:36 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:37.015 02:26:36 -- host/timeout.sh@125 -- # sleep 1
00:24:37.952 Running I/O for 10 seconds...
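This is the usual wait-mode bdevperf flow: the app is started with -z so it idles until told to run, the NVMe-oF controller is attached over the app's private RPC socket, and perform_tests then kicks off the workload. A condensed sketch under the same paths as this run (the fixed sleep here is an assumption standing in for the harness's waitforlisten polling):

    cd /home/vagrant/spdk_repo/spdk
    # Start bdevperf idle (-z) on its own RPC socket, backgrounded.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    sleep 2   # the harness polls the RPC socket (waitforlisten) instead
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Trigger the actual I/O run.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &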
00:24:38.213 02:26:37 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:38.213 [2024-07-15 02:26:37.571918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c62f0 is same with the state(5) to be set
[... several dozen further identical recv-state errors for tqpair=0x22c62f0, 02:26:37.571977 through 02:26:37.572313, omitted ...]
00:24:38.214 [2024-07-15 02:26:37.572626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:38.214 [2024-07-15 02:26:37.572667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar NOTICE pairs, 02:26:37.572691 through 02:26:37.573180, omitted: the remaining queued READ commands on qid:1 are each printed and completed as ABORTED - SQ DELETION (00/08) ...]
00:24:38.215 [2024-07-15 02:26:37.573191] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 
02:26:37.573825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.573987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.215 [2024-07-15 02:26:37.573996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.215 [2024-07-15 02:26:37.574007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574239] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:38.216 [2024-07-15 02:26:37.574672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 02:26:37.574861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.216 [2024-07-15 
02:26:37.574881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.216 [2024-07-15 02:26:37.574890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.574900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.574910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.574920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.574929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.574940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.574949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.574960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.574970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.574981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.574990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.217 [2024-07-15 02:26:37.575294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:38.217 [2024-07-15 02:26:37.575333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:38.217 [2024-07-15 02:26:37.575342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:8 PRP1 0x0 PRP2 0x0 00:24:38.217 [2024-07-15 02:26:37.575356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.217 [2024-07-15 02:26:37.575411] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1079a10 was disconnected and freed. reset controller. 00:24:38.217 [2024-07-15 02:26:37.575703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.217 [2024-07-15 02:26:37.575800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106e510 (9): Bad file descriptor 00:24:38.217 [2024-07-15 02:26:37.575947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.217 [2024-07-15 02:26:37.576007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.217 [2024-07-15 02:26:37.576029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x106e510 with addr=10.0.0.2, port=4420 00:24:38.217 [2024-07-15 02:26:37.576041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106e510 is same with the state(5) to be set 00:24:38.217 [2024-07-15 02:26:37.576063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106e510 (9): Bad file descriptor 00:24:38.217 [2024-07-15 02:26:37.576081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.217 [2024-07-15 02:26:37.576091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.217 [2024-07-15 02:26:37.576101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.217 [2024-07-15 02:26:37.576122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
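The burst above is one complete controller-reset cycle: the target side tore the TCP connection down mid-I/O, every queued READ was completed as ABORTED - SQ DELETION, and the reconnect attempt fails with connect() errno 111 (ECONNREFUSED) because the listener is gone. A rough post-processing sketch for checking the retry cadence in a saved copy of this log follows; the target.log file name and the timestamp field layout are assumptions taken from the excerpt, not part of the test suite.

  #!/usr/bin/env bash
  # Sketch only: print the gap between successive TCP reconnect attempts,
  # assuming saved log lines shaped like
  #   00:24:40.121 [2024-07-15 02:26:39.576458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: ...
  log=${1:-target.log}    # hypothetical file name
  prev=""
  grep 'nvme_tcp_qpair_connect_sock' "$log" | while read -r line; do
      # extract HH:MM:SS.micros from the bracketed wall-clock timestamp
      ts=$(sed -n 's/.*\[[0-9-]* \([0-9:.]*\)\].*/\1/p' <<< "$line")
      cur=$(awk -F'[:.]' '{printf "%d", (($1 * 3600) + ($2 * 60) + $3) * 1000000 + $4}' <<< "$ts")
      [[ -n $prev ]] && echo "reconnect attempt gap: $(( cur - prev )) us"
      prev=$cur
  done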
00:24:38.217 [2024-07-15 02:26:37.576134] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.217 02:26:37 -- host/timeout.sh@128 -- # wait 99977
00:24:40.121 [2024-07-15 02:26:39.576331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.121 [2024-07-15 02:26:39.576437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.121 [2024-07-15 02:26:39.576458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x106e510 with addr=10.0.0.2, port=4420
00:24:40.121 [2024-07-15 02:26:39.576482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106e510 is same with the state(5) to be set
00:24:40.121 [2024-07-15 02:26:39.576521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106e510 (9): Bad file descriptor
00:24:40.121 [2024-07-15 02:26:39.576544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.121 [2024-07-15 02:26:39.576555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.121 [2024-07-15 02:26:39.576566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.121 [2024-07-15 02:26:39.576594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.121 [2024-07-15 02:26:39.576622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.024 [2024-07-15 02:26:41.576797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.024 [2024-07-15 02:26:41.576911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.024 [2024-07-15 02:26:41.576931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x106e510 with addr=10.0.0.2, port=4420
00:24:42.024 [2024-07-15 02:26:41.576945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106e510 is same with the state(5) to be set
00:24:42.024 [2024-07-15 02:26:41.576972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x106e510 (9): Bad file descriptor
00:24:42.024 [2024-07-15 02:26:41.577004] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.024 [2024-07-15 02:26:41.577017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.024 [2024-07-15 02:26:41.577028] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.024 [2024-07-15 02:26:41.577055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.024 [2024-07-15 02:26:41.577067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:44.555 [2024-07-15 02:26:43.577149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:44.555 [2024-07-15 02:26:43.577221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:44.555 [2024-07-15 02:26:43.577233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:44.555 [2024-07-15 02:26:43.577244] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:44.555 [2024-07-15 02:26:43.577273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:45.121 
00:24:45.121 Latency(us)
00:24:45.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:45.121 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:24:45.121 NVMe0n1 : 8.13 2821.56 11.02 15.74 0.00 45072.96 2323.55 7015926.69
00:24:45.121 ===================================================================================================================
00:24:45.121 Total : 2821.56 11.02 15.74 0.00 45072.96 2323.55 7015926.69
00:24:45.121 0
00:24:45.121 02:26:44 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:45.121 Attaching 5 probes...
00:24:45.121 1269.668201: reset bdev controller NVMe0
00:24:45.121 1269.844059: reconnect bdev controller NVMe0
00:24:45.121 3270.137910: reconnect delay bdev controller NVMe0
00:24:45.121 3270.181121: reconnect bdev controller NVMe0
00:24:45.121 5270.633886: reconnect delay bdev controller NVMe0
00:24:45.121 5270.655589: reconnect bdev controller NVMe0
00:24:45.121 7271.076366: reconnect delay bdev controller NVMe0
00:24:45.121 7271.119041: reconnect bdev controller NVMe0
00:24:45.121 02:26:44 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:24:45.121 02:26:44 -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:24:45.121 02:26:44 -- host/timeout.sh@136 -- # kill 99919
00:24:45.121 02:26:44 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:45.121 02:26:44 -- host/timeout.sh@139 -- # killprocess 99890
00:24:45.121 02:26:44 -- common/autotest_common.sh@926 -- # '[' -z 99890 ']'
00:24:45.121 02:26:44 -- common/autotest_common.sh@930 -- # kill -0 99890
00:24:45.121 02:26:44 -- common/autotest_common.sh@931 -- # uname
00:24:45.121 02:26:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:45.121 02:26:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99890
00:24:45.121 killing process with pid 99890
00:24:45.121 Received shutdown signal, test time was about 8.192249 seconds
00:24:45.121 
00:24:45.121 Latency(us)
00:24:45.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:45.121 ===================================================================================================================
00:24:45.121 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:45.121 02:26:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:45.121 02:26:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:45.121 02:26:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99890'
00:24:45.121 02:26:44 -- common/autotest_common.sh@945 -- # kill 99890
00:24:45.121 02:26:44 -- common/autotest_common.sh@950 -- # wait 99890
00:24:45.381 02:26:44 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:45.640 02:26:45 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
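For reference, the pass/fail decision traced at host/timeout.sh@132 above is a plain count: each delayed reconnect appends one 'reconnect delay bdev controller NVMe0' line to trace.txt (the probe output appears to come from a bpftrace session, hence 'Attaching 5 probes...'), and the test fails if fewer than three occurred. The same assertion written out standalone, as a sketch:

  # trace.txt path copied from the trace above; the threshold mirrors the (( 3 <= 2 )) check
  count=$(grep -c 'reconnect delay bdev controller NVMe0' \
      /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
  if (( count <= 2 )); then
      echo "FAIL: expected at least 3 delayed reconnects, saw $count" >&2
      exit 1
  fi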
SIGTERM EXIT 00:24:45.640 02:26:45 -- host/timeout.sh@145 -- # nvmftestfini 00:24:45.640 02:26:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:45.640 02:26:45 -- nvmf/common.sh@116 -- # sync 00:24:45.640 02:26:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:45.640 02:26:45 -- nvmf/common.sh@119 -- # set +e 00:24:45.640 02:26:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:45.640 02:26:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:45.640 rmmod nvme_tcp 00:24:45.640 rmmod nvme_fabrics 00:24:45.640 rmmod nvme_keyring 00:24:45.898 02:26:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:45.898 02:26:45 -- nvmf/common.sh@123 -- # set -e 00:24:45.898 02:26:45 -- nvmf/common.sh@124 -- # return 0 00:24:45.898 02:26:45 -- nvmf/common.sh@477 -- # '[' -n 99307 ']' 00:24:45.898 02:26:45 -- nvmf/common.sh@478 -- # killprocess 99307 00:24:45.898 02:26:45 -- common/autotest_common.sh@926 -- # '[' -z 99307 ']' 00:24:45.898 02:26:45 -- common/autotest_common.sh@930 -- # kill -0 99307 00:24:45.898 02:26:45 -- common/autotest_common.sh@931 -- # uname 00:24:45.898 02:26:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:45.898 02:26:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99307 00:24:45.898 killing process with pid 99307 00:24:45.898 02:26:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:45.898 02:26:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:45.898 02:26:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99307' 00:24:45.898 02:26:45 -- common/autotest_common.sh@945 -- # kill 99307 00:24:45.898 02:26:45 -- common/autotest_common.sh@950 -- # wait 99307 00:24:46.157 02:26:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:46.157 02:26:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:46.157 02:26:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:46.157 02:26:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.157 02:26:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:46.157 02:26:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.157 02:26:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.157 02:26:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.157 02:26:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:46.157 ************************************ 00:24:46.157 END TEST nvmf_timeout 00:24:46.157 ************************************ 00:24:46.157 00:24:46.157 real 0m46.957s 00:24:46.157 user 2m17.887s 00:24:46.157 sys 0m5.179s 00:24:46.157 02:26:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:46.157 02:26:45 -- common/autotest_common.sh@10 -- # set +x 00:24:46.157 02:26:45 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:24:46.157 02:26:45 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:24:46.157 02:26:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:46.157 02:26:45 -- common/autotest_common.sh@10 -- # set +x 00:24:46.157 02:26:45 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:46.157 00:24:46.157 real 17m9.806s 00:24:46.157 user 54m28.748s 00:24:46.157 sys 3m50.049s 00:24:46.157 02:26:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:46.157 02:26:45 -- common/autotest_common.sh@10 -- # set +x 00:24:46.157 ************************************ 00:24:46.157 END TEST nvmf_tcp 00:24:46.157 ************************************ 00:24:46.157 02:26:45 -- spdk/autotest.sh@296 -- # [[ 0 
-eq 0 ]] 00:24:46.157 02:26:45 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:46.157 02:26:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:46.158 02:26:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:46.158 02:26:45 -- common/autotest_common.sh@10 -- # set +x 00:24:46.158 ************************************ 00:24:46.158 START TEST spdkcli_nvmf_tcp 00:24:46.158 ************************************ 00:24:46.158 02:26:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:46.158 * Looking for test storage... 00:24:46.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:46.158 02:26:45 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:46.158 02:26:45 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:46.158 02:26:45 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:46.158 02:26:45 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:46.158 02:26:45 -- nvmf/common.sh@7 -- # uname -s 00:24:46.158 02:26:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.158 02:26:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.158 02:26:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.158 02:26:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.158 02:26:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.158 02:26:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.158 02:26:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.158 02:26:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.158 02:26:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.158 02:26:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.158 02:26:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:24:46.158 02:26:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:24:46.158 02:26:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.158 02:26:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.158 02:26:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:46.158 02:26:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:46.158 02:26:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.158 02:26:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.158 02:26:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.158 02:26:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.158 02:26:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.158 02:26:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.158 02:26:45 -- paths/export.sh@5 -- # export PATH 00:24:46.158 02:26:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.158 02:26:45 -- nvmf/common.sh@46 -- # : 0 00:24:46.158 02:26:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:46.158 02:26:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:46.158 02:26:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:46.158 02:26:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.158 02:26:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.158 02:26:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:46.158 02:26:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:46.417 02:26:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:46.417 02:26:45 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:46.417 02:26:45 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:46.417 02:26:45 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:46.417 02:26:45 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:46.417 02:26:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:46.417 02:26:45 -- common/autotest_common.sh@10 -- # set +x 00:24:46.417 02:26:45 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:46.417 02:26:45 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=100190 00:24:46.417 02:26:45 -- spdkcli/common.sh@34 -- # waitforlisten 100190 00:24:46.417 02:26:45 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:46.417 02:26:45 -- common/autotest_common.sh@819 -- # '[' -z 100190 ']' 00:24:46.417 02:26:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.417 02:26:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:46.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.417 02:26:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.417 02:26:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:46.417 02:26:45 -- common/autotest_common.sh@10 -- # set +x 00:24:46.417 [2024-07-15 02:26:45.777414] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
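The waitforlisten step above gates the whole spdkcli run on the target's RPC socket coming up. A minimal standalone sketch of that pattern, assuming SPDK's stock scripts/rpc.py client and the default /var/tmp/spdk.sock address (the retry loop and kill -0 liveness probe mirror the trace; this is not the literal helper from test/common/autotest_common.sh):

  ./build/bin/nvmf_tgt -m 0x3 -p 0 &
  pid=$!
  rpc_sock=/var/tmp/spdk.sock
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
  for ((i = 0; i < 100; i++)); do
      # If the target died, stop polling instead of hanging the test.
      kill -0 "$pid" 2> /dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      # rpc_get_methods succeeds as soon as the RPC server is listening.
      scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &> /dev/null && break
      sleep 0.5
  done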
00:24:46.417 [2024-07-15 02:26:45.777540] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100190 ] 00:24:46.417 [2024-07-15 02:26:45.909540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:46.676 [2024-07-15 02:26:45.999586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:46.676 [2024-07-15 02:26:45.999924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.676 [2024-07-15 02:26:45.999935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.243 02:26:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:47.243 02:26:46 -- common/autotest_common.sh@852 -- # return 0 00:24:47.243 02:26:46 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:47.244 02:26:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:47.244 02:26:46 -- common/autotest_common.sh@10 -- # set +x 00:24:47.244 02:26:46 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:47.244 02:26:46 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:47.244 02:26:46 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:47.244 02:26:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:47.244 02:26:46 -- common/autotest_common.sh@10 -- # set +x 00:24:47.244 02:26:46 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:47.244 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:47.244 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:47.244 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:47.244 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:47.244 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:47.244 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:47.244 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:47.244 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:47.244 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:47.244 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:47.244 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:47.244 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:47.244 ' 00:24:47.810 [2024-07-15 02:26:47.175002] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:50.343 [2024-07-15 02:26:49.413542] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.275 [2024-07-15 02:26:50.690592] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:53.823 [2024-07-15 02:26:53.028303] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:55.723 [2024-07-15 02:26:55.053688] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:57.097 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:57.097 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:57.098 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:57.098 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:57.098 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:57.098 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:57.098 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:57.098 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:57.098 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:57.098 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:57.098 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:57.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:57.098 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:57.356 02:26:56 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:57.356 02:26:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:57.356 02:26:56 -- common/autotest_common.sh@10 -- # set +x 00:24:57.356 02:26:56 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:57.356 02:26:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:57.356 02:26:56 -- common/autotest_common.sh@10 -- # set +x 00:24:57.356 02:26:56 -- spdkcli/nvmf.sh@69 -- # check_match 00:24:57.356 02:26:56 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:24:57.923 02:26:57 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:57.923 02:26:57 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:57.923 02:26:57 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:57.923 02:26:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:57.923 02:26:57 -- common/autotest_common.sh@10 -- # set +x 00:24:57.923 02:26:57 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:57.923 02:26:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:57.923 02:26:57 -- 
common/autotest_common.sh@10 -- # set +x 00:24:57.923 02:26:57 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:57.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:57.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:57.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:57.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:57.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:57.923 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:57.923 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:57.923 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:57.923 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:57.923 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:57.923 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:57.923 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:57.923 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:57.923 ' 00:25:03.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:03.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:03.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:03.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:03.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:03.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:03.191 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:03.191 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:03.191 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:03.191 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:03.191 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:03.191 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:03.191 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:03.191 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:03.191 02:27:02 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:03.191 02:27:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:03.191 02:27:02 -- common/autotest_common.sh@10 -- # set +x 00:25:03.191 02:27:02 -- spdkcli/nvmf.sh@90 -- # killprocess 100190 00:25:03.191 02:27:02 -- common/autotest_common.sh@926 -- # '[' -z 100190 ']' 00:25:03.191 02:27:02 -- common/autotest_common.sh@930 -- # kill -0 100190 00:25:03.191 02:27:02 -- common/autotest_common.sh@931 -- # uname 00:25:03.191 02:27:02 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:03.191 02:27:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100190 00:25:03.449 02:27:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:03.449 02:27:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:03.449 killing process with pid 100190 00:25:03.449 02:27:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100190' 00:25:03.449 02:27:02 -- common/autotest_common.sh@945 -- # kill 100190 00:25:03.449 [2024-07-15 02:27:02.757007] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:03.449 02:27:02 -- common/autotest_common.sh@950 -- # wait 100190 00:25:03.449 02:27:02 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:03.450 02:27:02 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:03.450 02:27:02 -- spdkcli/common.sh@13 -- # '[' -n 100190 ']' 00:25:03.450 02:27:02 -- spdkcli/common.sh@14 -- # killprocess 100190 00:25:03.450 02:27:02 -- common/autotest_common.sh@926 -- # '[' -z 100190 ']' 00:25:03.450 02:27:02 -- common/autotest_common.sh@930 -- # kill -0 100190 00:25:03.450 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (100190) - No such process 00:25:03.450 Process with pid 100190 is not found 00:25:03.450 02:27:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 100190 is not found' 00:25:03.450 02:27:02 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:03.450 02:27:02 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:03.450 02:27:02 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:03.450 00:25:03.450 real 0m17.335s 00:25:03.450 user 0m37.236s 00:25:03.450 sys 0m0.954s 00:25:03.450 02:27:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.450 ************************************ 00:25:03.450 END TEST spdkcli_nvmf_tcp 00:25:03.450 02:27:02 -- common/autotest_common.sh@10 -- # set +x 00:25:03.450 ************************************ 00:25:03.708 02:27:03 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:03.708 02:27:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:03.708 02:27:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.708 02:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:03.708 ************************************ 00:25:03.708 START TEST nvmf_identify_passthru 00:25:03.708 ************************************ 00:25:03.708 02:27:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:03.708 * Looking for test storage... 
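The killprocess helper traced twice in this section (pids 99307 and 100190) follows one fixed shape. A simplified reconstruction from the trace, with the non-Linux ps branch elided — the real helper lives in test/common/autotest_common.sh:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      # kill -0 probes liveness without delivering a signal; if the process
      # is already gone, report it the way the second reap of 100190 does above.
      if ! kill -0 "$pid" 2> /dev/null; then
          echo "Process with pid $pid is not found"
          return 0
      fi
      local process_name
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      if [ "$process_name" = sudo ]; then
          # A sudo wrapper: signal its child, which is the actual app.
          kill "$(pgrep -P "$pid")"
      else
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid"    # reap the child so its exit status is collected
  }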
00:25:03.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:03.708 02:27:03 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:03.708 02:27:03 -- nvmf/common.sh@7 -- # uname -s 00:25:03.708 02:27:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.708 02:27:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.708 02:27:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.708 02:27:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.708 02:27:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.708 02:27:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.708 02:27:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.708 02:27:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.708 02:27:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.708 02:27:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.708 02:27:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:25:03.708 02:27:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:25:03.709 02:27:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.709 02:27:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.709 02:27:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:03.709 02:27:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.709 02:27:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.709 02:27:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.709 02:27:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.709 02:27:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.709 02:27:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.709 02:27:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.709 02:27:03 -- paths/export.sh@5 -- # export PATH 00:25:03.709 02:27:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.709 02:27:03 -- nvmf/common.sh@46 -- # : 0 00:25:03.709 02:27:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:03.709 02:27:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:03.709 02:27:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:03.709 02:27:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.709 02:27:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.709 02:27:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:03.709 02:27:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:03.709 02:27:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:03.709 02:27:03 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.709 02:27:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.709 02:27:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.709 02:27:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.709 02:27:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.709 02:27:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.709 02:27:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.709 02:27:03 -- paths/export.sh@5 -- # export PATH 00:25:03.709 02:27:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.709 02:27:03 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:03.709 02:27:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:03.709 02:27:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.709 02:27:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:03.709 02:27:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:03.709 02:27:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:03.709 02:27:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.709 02:27:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:03.709 02:27:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.709 02:27:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:03.709 02:27:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:03.709 02:27:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:03.709 02:27:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:03.709 02:27:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:03.709 02:27:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:03.709 02:27:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.709 02:27:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.709 02:27:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:03.709 02:27:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:03.709 02:27:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:03.709 02:27:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:03.709 02:27:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:03.709 02:27:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.709 02:27:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:03.709 02:27:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:03.709 02:27:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:03.709 02:27:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:03.709 02:27:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:03.709 02:27:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:03.709 Cannot find device "nvmf_tgt_br" 00:25:03.709 02:27:03 -- nvmf/common.sh@154 -- # true 00:25:03.709 02:27:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:03.709 Cannot find device "nvmf_tgt_br2" 00:25:03.709 02:27:03 -- nvmf/common.sh@155 -- # true 00:25:03.709 02:27:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:03.709 02:27:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:03.709 Cannot find device "nvmf_tgt_br" 00:25:03.709 02:27:03 -- nvmf/common.sh@157 -- # true 00:25:03.709 02:27:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:03.709 Cannot find device "nvmf_tgt_br2" 00:25:03.709 02:27:03 -- nvmf/common.sh@158 -- # true 00:25:03.709 02:27:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:03.709 02:27:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:03.709 02:27:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:03.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:03.709 02:27:03 -- nvmf/common.sh@161 -- # true 00:25:03.709 02:27:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:03.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:03.709 02:27:03 -- nvmf/common.sh@162 -- # true 00:25:03.709 02:27:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:03.709 02:27:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:03.968 02:27:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:03.968 02:27:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:03.968 02:27:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:03.968 02:27:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:03.968 02:27:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:03.968 02:27:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:03.968 02:27:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:03.968 02:27:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:03.968 02:27:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:03.968 02:27:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:03.968 02:27:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:03.968 02:27:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:03.968 02:27:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:03.968 02:27:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:03.968 02:27:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:03.968 02:27:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:03.968 02:27:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:03.968 02:27:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:03.968 02:27:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:03.968 02:27:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:03.968 02:27:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:03.968 02:27:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:03.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:25:03.968 00:25:03.968 --- 10.0.0.2 ping statistics --- 00:25:03.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.968 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:25:03.968 02:27:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:03.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:03.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:25:03.968 00:25:03.968 --- 10.0.0.3 ping statistics --- 00:25:03.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.968 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:03.968 02:27:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:03.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:03.968 00:25:03.968 --- 10.0.0.1 ping statistics --- 00:25:03.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.968 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:03.968 02:27:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.968 02:27:03 -- nvmf/common.sh@421 -- # return 0 00:25:03.968 02:27:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:03.968 02:27:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.968 02:27:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:03.968 02:27:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:03.968 02:27:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.968 02:27:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:03.968 02:27:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:03.968 02:27:03 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:03.968 02:27:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:03.968 02:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:03.968 02:27:03 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:03.968 02:27:03 -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:03.968 02:27:03 -- common/autotest_common.sh@1509 -- # local bdfs 00:25:03.968 02:27:03 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:03.968 02:27:03 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:03.968 02:27:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:03.968 02:27:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:03.968 02:27:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:03.968 02:27:03 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:03.968 02:27:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:03.968 02:27:03 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:03.968 02:27:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:03.968 02:27:03 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:25:03.968 02:27:03 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:03.968 02:27:03 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:03.968 02:27:03 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:03.968 02:27:03 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:03.968 02:27:03 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:04.226 02:27:03 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:04.226 02:27:03 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:04.226 02:27:03 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:04.226 02:27:03 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:04.484 02:27:03 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:04.484 02:27:03 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:04.484 02:27:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:04.484 02:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:04.484 02:27:03 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:04.484 02:27:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:04.484 02:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:04.484 02:27:03 -- target/identify_passthru.sh@31 -- # nvmfpid=100688 00:25:04.484 02:27:03 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:04.484 02:27:03 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:04.484 02:27:03 -- target/identify_passthru.sh@35 -- # waitforlisten 100688 00:25:04.484 02:27:03 -- common/autotest_common.sh@819 -- # '[' -z 100688 ']' 00:25:04.484 02:27:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.484 02:27:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:04.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.484 02:27:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.484 02:27:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:04.484 02:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:04.484 [2024-07-15 02:27:03.966069] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:04.484 [2024-07-15 02:27:03.966186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.742 [2024-07-15 02:27:04.105898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:04.742 [2024-07-15 02:27:04.192865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:04.742 [2024-07-15 02:27:04.193051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.742 [2024-07-15 02:27:04.193065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.742 [2024-07-15 02:27:04.193073] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
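Once the reactors are up, the passthru bring-up below reduces to a short RPC sequence. Condensed from the rpc_cmd calls that follow, with every RPC name and argument taken verbatim from the trace (scripts/rpc.py shown standing in for the test's rpc_cmd wrapper):

  rpc="scripts/rpc.py"
  $rpc nvmf_set_config --passthru-identify-ctrlr     # enable the custom identify handler
  $rpc framework_start_init                          # release the --wait-for-rpc hold
  $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, io_unit_size 8192
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420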
00:25:04.742 [2024-07-15 02:27:04.193227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.742 [2024-07-15 02:27:04.193386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.742 [2024-07-15 02:27:04.193888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:04.742 [2024-07-15 02:27:04.193921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.678 02:27:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:05.678 02:27:04 -- common/autotest_common.sh@852 -- # return 0 00:25:05.678 02:27:04 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:05.678 02:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.678 02:27:04 -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 02:27:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.678 02:27:04 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:05.678 02:27:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.678 02:27:04 -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 [2024-07-15 02:27:05.050135] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:05.678 02:27:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.678 02:27:05 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:05.678 02:27:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.678 02:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 [2024-07-15 02:27:05.064053] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.678 02:27:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.678 02:27:05 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:05.678 02:27:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:05.678 02:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 02:27:05 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:05.678 02:27:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.678 02:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 Nvme0n1 00:25:05.678 02:27:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.678 02:27:05 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:05.678 02:27:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.678 02:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 02:27:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.678 02:27:05 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:05.678 02:27:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.678 02:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 02:27:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.678 02:27:05 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.678 02:27:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.678 02:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 [2024-07-15 02:27:05.205409] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.678 02:27:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:05.678 02:27:05 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:05.678 02:27:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.678 02:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:05.678 [2024-07-15 02:27:05.213199] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:05.678 [ 00:25:05.678 { 00:25:05.678 "allow_any_host": true, 00:25:05.678 "hosts": [], 00:25:05.678 "listen_addresses": [], 00:25:05.678 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:05.678 "subtype": "Discovery" 00:25:05.678 }, 00:25:05.678 { 00:25:05.678 "allow_any_host": true, 00:25:05.678 "hosts": [], 00:25:05.678 "listen_addresses": [ 00:25:05.678 { 00:25:05.678 "adrfam": "IPv4", 00:25:05.678 "traddr": "10.0.0.2", 00:25:05.678 "transport": "TCP", 00:25:05.678 "trsvcid": "4420", 00:25:05.678 "trtype": "TCP" 00:25:05.678 } 00:25:05.678 ], 00:25:05.678 "max_cntlid": 65519, 00:25:05.678 "max_namespaces": 1, 00:25:05.678 "min_cntlid": 1, 00:25:05.678 "model_number": "SPDK bdev Controller", 00:25:05.678 "namespaces": [ 00:25:05.678 { 00:25:05.678 "bdev_name": "Nvme0n1", 00:25:05.678 "name": "Nvme0n1", 00:25:05.678 "nguid": "13656AED120C452C9C3A2246BAFE5440", 00:25:05.678 "nsid": 1, 00:25:05.678 "uuid": "13656aed-120c-452c-9c3a-2246bafe5440" 00:25:05.678 } 00:25:05.678 ], 00:25:05.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.678 "serial_number": "SPDK00000000000001", 00:25:05.678 "subtype": "NVMe" 00:25:05.678 } 00:25:05.678 ] 00:25:05.678 02:27:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.678 02:27:05 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:05.678 02:27:05 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:05.678 02:27:05 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:05.937 02:27:05 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:05.938 02:27:05 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:05.938 02:27:05 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:05.938 02:27:05 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:06.197 02:27:05 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:06.197 02:27:05 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:06.197 02:27:05 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:06.197 02:27:05 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.197 02:27:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.197 02:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:06.197 02:27:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.197 02:27:05 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:06.197 02:27:05 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:06.197 02:27:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:06.197 02:27:05 -- nvmf/common.sh@116 -- # sync 00:25:06.197 02:27:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:06.197 02:27:05 -- nvmf/common.sh@119 -- # set +e 00:25:06.197 02:27:05 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:06.197 02:27:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:06.197 rmmod nvme_tcp 00:25:06.197 rmmod nvme_fabrics 00:25:06.197 rmmod nvme_keyring 00:25:06.456 02:27:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:06.456 02:27:05 -- nvmf/common.sh@123 -- # set -e 00:25:06.456 02:27:05 -- nvmf/common.sh@124 -- # return 0 00:25:06.456 02:27:05 -- nvmf/common.sh@477 -- # '[' -n 100688 ']' 00:25:06.456 02:27:05 -- nvmf/common.sh@478 -- # killprocess 100688 00:25:06.456 02:27:05 -- common/autotest_common.sh@926 -- # '[' -z 100688 ']' 00:25:06.456 02:27:05 -- common/autotest_common.sh@930 -- # kill -0 100688 00:25:06.456 02:27:05 -- common/autotest_common.sh@931 -- # uname 00:25:06.456 02:27:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:06.456 02:27:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100688 00:25:06.456 02:27:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:06.456 02:27:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:06.456 killing process with pid 100688 00:25:06.456 02:27:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100688' 00:25:06.456 02:27:05 -- common/autotest_common.sh@945 -- # kill 100688 00:25:06.456 [2024-07-15 02:27:05.787889] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:06.456 02:27:05 -- common/autotest_common.sh@950 -- # wait 100688 00:25:06.715 02:27:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:06.715 02:27:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:06.715 02:27:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:06.715 02:27:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:06.715 02:27:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:06.715 02:27:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.715 02:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:06.715 02:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.715 02:27:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:06.715 00:25:06.715 real 0m3.037s 00:25:06.715 user 0m7.609s 00:25:06.715 sys 0m0.822s 00:25:06.715 02:27:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:06.715 ************************************ 00:25:06.715 END TEST nvmf_identify_passthru 00:25:06.715 ************************************ 00:25:06.715 02:27:06 -- common/autotest_common.sh@10 -- # set +x 00:25:06.715 02:27:06 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:06.715 02:27:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:06.715 02:27:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:06.715 02:27:06 -- common/autotest_common.sh@10 -- # set +x 00:25:06.715 ************************************ 00:25:06.715 START TEST nvmf_dif 00:25:06.715 ************************************ 00:25:06.715 02:27:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:06.715 * Looking for test storage... 
00:25:06.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:06.715 02:27:06 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:06.716 02:27:06 -- nvmf/common.sh@7 -- # uname -s 00:25:06.716 02:27:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.716 02:27:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.716 02:27:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.716 02:27:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.716 02:27:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.716 02:27:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.716 02:27:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.716 02:27:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.716 02:27:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.716 02:27:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.716 02:27:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:25:06.716 02:27:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:25:06.716 02:27:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.716 02:27:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.716 02:27:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:06.716 02:27:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:06.716 02:27:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.716 02:27:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.716 02:27:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.716 02:27:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.716 02:27:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.716 02:27:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.716 02:27:06 -- paths/export.sh@5 -- # export PATH 00:25:06.716 02:27:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.716 02:27:06 -- nvmf/common.sh@46 -- # : 0 00:25:06.716 02:27:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:06.716 02:27:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:06.716 02:27:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:06.716 02:27:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.716 02:27:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.716 02:27:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:06.716 02:27:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:06.716 02:27:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:06.716 02:27:06 -- target/dif.sh@15 -- # NULL_META=16 00:25:06.716 02:27:06 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:06.716 02:27:06 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:06.716 02:27:06 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:06.716 02:27:06 -- target/dif.sh@135 -- # nvmftestinit 00:25:06.716 02:27:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:06.716 02:27:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.716 02:27:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:06.716 02:27:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:06.716 02:27:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:06.716 02:27:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.716 02:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:06.716 02:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.716 02:27:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:06.716 02:27:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:06.716 02:27:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:06.716 02:27:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:06.716 02:27:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:06.716 02:27:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:06.716 02:27:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.716 02:27:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.716 02:27:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:06.716 02:27:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:06.716 02:27:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:06.716 02:27:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:06.716 02:27:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:06.716 02:27:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.716 02:27:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:06.716 02:27:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:06.716 02:27:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:06.716 02:27:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:06.716 02:27:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:06.716 02:27:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:06.716 Cannot find device "nvmf_tgt_br" 
00:25:06.716 02:27:06 -- nvmf/common.sh@154 -- # true 00:25:06.716 02:27:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:06.716 Cannot find device "nvmf_tgt_br2" 00:25:06.716 02:27:06 -- nvmf/common.sh@155 -- # true 00:25:06.716 02:27:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:06.716 02:27:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:06.716 Cannot find device "nvmf_tgt_br" 00:25:06.716 02:27:06 -- nvmf/common.sh@157 -- # true 00:25:06.716 02:27:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:06.716 Cannot find device "nvmf_tgt_br2" 00:25:06.716 02:27:06 -- nvmf/common.sh@158 -- # true 00:25:06.716 02:27:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:06.975 02:27:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:06.975 02:27:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:06.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:06.975 02:27:06 -- nvmf/common.sh@161 -- # true 00:25:06.975 02:27:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:06.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:06.975 02:27:06 -- nvmf/common.sh@162 -- # true 00:25:06.975 02:27:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:06.975 02:27:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:06.975 02:27:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:06.975 02:27:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:06.975 02:27:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:06.975 02:27:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:06.975 02:27:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:06.975 02:27:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:06.975 02:27:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:06.975 02:27:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:06.975 02:27:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:06.975 02:27:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:06.975 02:27:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:06.975 02:27:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:06.975 02:27:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:06.975 02:27:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:06.975 02:27:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:06.975 02:27:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:06.975 02:27:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:06.975 02:27:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:06.975 02:27:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:06.975 02:27:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:06.975 02:27:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:06.975 02:27:06 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:06.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:25:06.976 00:25:06.976 --- 10.0.0.2 ping statistics --- 00:25:06.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.976 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:25:06.976 02:27:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:07.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:07.235 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:25:07.235 00:25:07.235 --- 10.0.0.3 ping statistics --- 00:25:07.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.235 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:07.235 02:27:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:07.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:25:07.235 00:25:07.235 --- 10.0.0.1 ping statistics --- 00:25:07.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.235 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:25:07.235 02:27:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.235 02:27:06 -- nvmf/common.sh@421 -- # return 0 00:25:07.235 02:27:06 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:07.235 02:27:06 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:07.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:07.494 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:07.494 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:07.494 02:27:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.494 02:27:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:07.494 02:27:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:07.494 02:27:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.494 02:27:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:07.494 02:27:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:07.494 02:27:06 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:07.494 02:27:06 -- target/dif.sh@137 -- # nvmfappstart 00:25:07.494 02:27:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:07.494 02:27:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:07.495 02:27:06 -- common/autotest_common.sh@10 -- # set +x 00:25:07.495 02:27:06 -- nvmf/common.sh@469 -- # nvmfpid=101041 00:25:07.495 02:27:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:07.495 02:27:06 -- nvmf/common.sh@470 -- # waitforlisten 101041 00:25:07.495 02:27:06 -- common/autotest_common.sh@819 -- # '[' -z 101041 ']' 00:25:07.495 02:27:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.495 02:27:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:07.495 02:27:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
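Everything the data path needs is now in place: two veth pairs give the target namespace 10.0.0.2 and 10.0.0.3, the initiator side holds 10.0.0.1, all peer ends are enslaved to bridge nvmf_br, and the three pings confirm connectivity in both directions. The target is then launched inside the namespace so its TCP listeners bind the namespace side, while the RPC socket stays reachable from the host. A hand-run equivalent of nvmfappstart plus waitforlisten would look roughly like this (a sketch; the real wait loop in autotest_common.sh is more careful about timeouts and PID checks):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # Poll the RPC socket until the app is ready to accept rpc_cmd calls.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done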
00:25:07.495 02:27:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:07.495 02:27:06 -- common/autotest_common.sh@10 -- # set +x 00:25:07.495 [2024-07-15 02:27:06.999143] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:07.495 [2024-07-15 02:27:06.999263] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.754 [2024-07-15 02:27:07.137487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.754 [2024-07-15 02:27:07.233714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:07.754 [2024-07-15 02:27:07.233889] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.754 [2024-07-15 02:27:07.233905] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.754 [2024-07-15 02:27:07.233916] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.754 [2024-07-15 02:27:07.233944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.689 02:27:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:08.689 02:27:07 -- common/autotest_common.sh@852 -- # return 0 00:25:08.689 02:27:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:08.689 02:27:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:08.689 02:27:07 -- common/autotest_common.sh@10 -- # set +x 00:25:08.689 02:27:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.689 02:27:08 -- target/dif.sh@139 -- # create_transport 00:25:08.689 02:27:08 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:08.689 02:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.689 02:27:08 -- common/autotest_common.sh@10 -- # set +x 00:25:08.689 [2024-07-15 02:27:08.044593] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.689 02:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.689 02:27:08 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:08.689 02:27:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:08.689 02:27:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:08.689 02:27:08 -- common/autotest_common.sh@10 -- # set +x 00:25:08.689 ************************************ 00:25:08.689 START TEST fio_dif_1_default 00:25:08.689 ************************************ 00:25:08.689 02:27:08 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:25:08.689 02:27:08 -- target/dif.sh@86 -- # create_subsystems 0 00:25:08.689 02:27:08 -- target/dif.sh@28 -- # local sub 00:25:08.689 02:27:08 -- target/dif.sh@30 -- # for sub in "$@" 00:25:08.689 02:27:08 -- target/dif.sh@31 -- # create_subsystem 0 00:25:08.689 02:27:08 -- target/dif.sh@18 -- # local sub_id=0 00:25:08.690 02:27:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:08.690 02:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.690 02:27:08 -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 bdev_null0 00:25:08.690 02:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.690 02:27:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:08.690 02:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.690 02:27:08 -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 02:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.690 02:27:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:08.690 02:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.690 02:27:08 -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 02:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.690 02:27:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:08.690 02:27:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.690 02:27:08 -- common/autotest_common.sh@10 -- # set +x 00:25:08.690 [2024-07-15 02:27:08.088724] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.690 02:27:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.690 02:27:08 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:08.690 02:27:08 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:08.690 02:27:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:08.690 02:27:08 -- nvmf/common.sh@520 -- # config=() 00:25:08.690 02:27:08 -- target/dif.sh@82 -- # gen_fio_conf 00:25:08.690 02:27:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.690 02:27:08 -- nvmf/common.sh@520 -- # local subsystem config 00:25:08.690 02:27:08 -- target/dif.sh@54 -- # local file 00:25:08.690 02:27:08 -- target/dif.sh@56 -- # cat 00:25:08.690 02:27:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:08.690 02:27:08 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.690 02:27:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:08.690 { 00:25:08.690 "params": { 00:25:08.690 "name": "Nvme$subsystem", 00:25:08.690 "trtype": "$TEST_TRANSPORT", 00:25:08.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:08.690 "adrfam": "ipv4", 00:25:08.690 "trsvcid": "$NVMF_PORT", 00:25:08.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:08.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:08.690 "hdgst": ${hdgst:-false}, 00:25:08.690 "ddgst": ${ddgst:-false} 00:25:08.690 }, 00:25:08.690 "method": "bdev_nvme_attach_controller" 00:25:08.690 } 00:25:08.690 EOF 00:25:08.690 )") 00:25:08.690 02:27:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:08.690 02:27:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:08.690 02:27:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:08.690 02:27:08 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.690 02:27:08 -- common/autotest_common.sh@1320 -- # shift 00:25:08.690 02:27:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:08.690 02:27:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.690 02:27:08 -- nvmf/common.sh@542 -- # cat 00:25:08.690 02:27:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:08.690 02:27:08 -- target/dif.sh@72 -- # (( file <= files )) 00:25:08.690 02:27:08 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.690 
02:27:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:08.690 02:27:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:08.690 02:27:08 -- nvmf/common.sh@544 -- # jq . 00:25:08.690 02:27:08 -- nvmf/common.sh@545 -- # IFS=, 00:25:08.690 02:27:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:08.690 "params": { 00:25:08.690 "name": "Nvme0", 00:25:08.690 "trtype": "tcp", 00:25:08.690 "traddr": "10.0.0.2", 00:25:08.690 "adrfam": "ipv4", 00:25:08.690 "trsvcid": "4420", 00:25:08.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:08.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:08.690 "hdgst": false, 00:25:08.690 "ddgst": false 00:25:08.690 }, 00:25:08.690 "method": "bdev_nvme_attach_controller" 00:25:08.690 }' 00:25:08.690 02:27:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:08.690 02:27:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:08.690 02:27:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:08.690 02:27:08 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:08.690 02:27:08 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:08.690 02:27:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:08.690 02:27:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:08.690 02:27:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:08.690 02:27:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:08.690 02:27:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:08.949 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:08.949 fio-3.35 00:25:08.949 Starting 1 thread 00:25:09.207 [2024-07-15 02:27:08.711851] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
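Two details of the fio invocation above are worth spelling out. First, the ldd | grep | awk sequence probes whether the fio bdev plugin was linked against a sanitizer runtime; if it had been, that library would have to be preloaded ahead of the plugin, which is why LD_PRELOAD ends up as ' <plugin>' (an empty sanitizer path plus the plugin) on this non-ASAN build. Reduced to its core (a sketch of the pattern in autotest_common.sh):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # empty here: not an ASAN build
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

Second, the "RPC Unix domain socket ... in use" errors around this point are benign: the fio plugin hosts its own SPDK application instance, which tries the default /var/tmp/spdk.sock already held by nvmf_tgt and then carries on without an RPC listener.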
00:25:09.207 [2024-07-15 02:27:08.711960] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:21.423 00:25:21.423 filename0: (groupid=0, jobs=1): err= 0: pid=101125: Mon Jul 15 02:27:18 2024 00:25:21.423 read: IOPS=1239, BW=4957KiB/s (5076kB/s)(48.5MiB/10015msec) 00:25:21.423 slat (nsec): min=6145, max=49789, avg=8181.73, stdev=3516.66 00:25:21.423 clat (usec): min=359, max=41554, avg=3202.87, stdev=10207.12 00:25:21.423 lat (usec): min=365, max=41564, avg=3211.05, stdev=10207.16 00:25:21.423 clat percentiles (usec): 00:25:21.423 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 404], 00:25:21.423 | 30.00th=[ 416], 40.00th=[ 424], 50.00th=[ 437], 60.00th=[ 449], 00:25:21.423 | 70.00th=[ 465], 80.00th=[ 482], 90.00th=[ 519], 95.00th=[40633], 00:25:21.423 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:21.423 | 99.99th=[41681] 00:25:21.423 bw ( KiB/s): min= 2592, max= 7520, per=100.00%, avg=4962.75, stdev=1306.26, samples=20 00:25:21.423 iops : min= 648, max= 1880, avg=1240.65, stdev=326.58, samples=20 00:25:21.423 lat (usec) : 500=86.80%, 750=6.34% 00:25:21.423 lat (msec) : 10=0.03%, 50=6.83% 00:25:21.423 cpu : usr=92.15%, sys=7.12%, ctx=23, majf=0, minf=8 00:25:21.423 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:21.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.423 issued rwts: total=12412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.423 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:21.423 00:25:21.423 Run status group 0 (all jobs): 00:25:21.423 READ: bw=4957KiB/s (5076kB/s), 4957KiB/s-4957KiB/s (5076kB/s-5076kB/s), io=48.5MiB (50.8MB), run=10015-10015msec 00:25:21.423 02:27:19 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:21.423 02:27:19 -- target/dif.sh@43 -- # local sub 00:25:21.423 02:27:19 -- target/dif.sh@45 -- # for sub in "$@" 00:25:21.423 02:27:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:21.423 02:27:19 -- target/dif.sh@36 -- # local sub_id=0 00:25:21.423 02:27:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 02:27:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 00:25:21.423 real 0m11.006s 00:25:21.423 user 0m9.870s 00:25:21.423 sys 0m0.976s 00:25:21.423 02:27:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 ************************************ 00:25:21.423 END TEST fio_dif_1_default 00:25:21.423 ************************************ 00:25:21.423 02:27:19 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:21.423 02:27:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:21.423 02:27:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 ************************************ 00:25:21.423 START TEST 
fio_dif_1_multi_subsystems 00:25:21.423 ************************************ 00:25:21.423 02:27:19 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:25:21.423 02:27:19 -- target/dif.sh@92 -- # local files=1 00:25:21.423 02:27:19 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:21.423 02:27:19 -- target/dif.sh@28 -- # local sub 00:25:21.423 02:27:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:21.423 02:27:19 -- target/dif.sh@31 -- # create_subsystem 0 00:25:21.423 02:27:19 -- target/dif.sh@18 -- # local sub_id=0 00:25:21.423 02:27:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 bdev_null0 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 02:27:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 02:27:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 02:27:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 [2024-07-15 02:27:19.152818] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 02:27:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:21.423 02:27:19 -- target/dif.sh@31 -- # create_subsystem 1 00:25:21.423 02:27:19 -- target/dif.sh@18 -- # local sub_id=1 00:25:21.423 02:27:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 bdev_null1 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 02:27:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 02:27:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- common/autotest_common.sh@10 -- # set +x 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 02:27:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.423 02:27:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.423 02:27:19 -- 
common/autotest_common.sh@10 -- # set +x 00:25:21.423 02:27:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.423 02:27:19 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:21.423 02:27:19 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:21.423 02:27:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:21.423 02:27:19 -- nvmf/common.sh@520 -- # config=() 00:25:21.423 02:27:19 -- nvmf/common.sh@520 -- # local subsystem config 00:25:21.423 02:27:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.423 02:27:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.423 { 00:25:21.423 "params": { 00:25:21.423 "name": "Nvme$subsystem", 00:25:21.423 "trtype": "$TEST_TRANSPORT", 00:25:21.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.423 "adrfam": "ipv4", 00:25:21.423 "trsvcid": "$NVMF_PORT", 00:25:21.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.423 "hdgst": ${hdgst:-false}, 00:25:21.423 "ddgst": ${ddgst:-false} 00:25:21.423 }, 00:25:21.423 "method": "bdev_nvme_attach_controller" 00:25:21.423 } 00:25:21.423 EOF 00:25:21.423 )") 00:25:21.423 02:27:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:21.423 02:27:19 -- target/dif.sh@82 -- # gen_fio_conf 00:25:21.423 02:27:19 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:21.423 02:27:19 -- target/dif.sh@54 -- # local file 00:25:21.423 02:27:19 -- target/dif.sh@56 -- # cat 00:25:21.423 02:27:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:21.423 02:27:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:21.423 02:27:19 -- nvmf/common.sh@542 -- # cat 00:25:21.423 02:27:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:21.423 02:27:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:21.423 02:27:19 -- common/autotest_common.sh@1320 -- # shift 00:25:21.423 02:27:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:21.423 02:27:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:21.423 02:27:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:21.423 02:27:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:21.423 02:27:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:21.423 02:27:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.423 02:27:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.423 { 00:25:21.423 "params": { 00:25:21.423 "name": "Nvme$subsystem", 00:25:21.423 "trtype": "$TEST_TRANSPORT", 00:25:21.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.423 "adrfam": "ipv4", 00:25:21.423 "trsvcid": "$NVMF_PORT", 00:25:21.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.423 "hdgst": ${hdgst:-false}, 00:25:21.423 "ddgst": ${ddgst:-false} 00:25:21.423 }, 00:25:21.423 "method": "bdev_nvme_attach_controller" 00:25:21.423 } 00:25:21.423 EOF 00:25:21.423 )") 00:25:21.423 02:27:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:21.423 02:27:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:21.423 02:27:19 -- target/dif.sh@73 -- # cat 00:25:21.423 02:27:19 -- nvmf/common.sh@542 -- # cat 00:25:21.423 02:27:19 -- 
nvmf/common.sh@544 -- # jq . 00:25:21.423 02:27:19 -- target/dif.sh@72 -- # (( file++ )) 00:25:21.423 02:27:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:21.423 02:27:19 -- nvmf/common.sh@545 -- # IFS=, 00:25:21.423 02:27:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:21.423 "params": { 00:25:21.423 "name": "Nvme0", 00:25:21.423 "trtype": "tcp", 00:25:21.423 "traddr": "10.0.0.2", 00:25:21.423 "adrfam": "ipv4", 00:25:21.423 "trsvcid": "4420", 00:25:21.423 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:21.423 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:21.423 "hdgst": false, 00:25:21.423 "ddgst": false 00:25:21.423 }, 00:25:21.423 "method": "bdev_nvme_attach_controller" 00:25:21.423 },{ 00:25:21.423 "params": { 00:25:21.423 "name": "Nvme1", 00:25:21.423 "trtype": "tcp", 00:25:21.423 "traddr": "10.0.0.2", 00:25:21.423 "adrfam": "ipv4", 00:25:21.423 "trsvcid": "4420", 00:25:21.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:21.423 "hdgst": false, 00:25:21.423 "ddgst": false 00:25:21.423 }, 00:25:21.423 "method": "bdev_nvme_attach_controller" 00:25:21.423 }' 00:25:21.423 02:27:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:21.423 02:27:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:21.423 02:27:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:21.423 02:27:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:21.423 02:27:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:21.423 02:27:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:21.423 02:27:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:21.423 02:27:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:21.423 02:27:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:21.423 02:27:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:21.424 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:21.424 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:21.424 fio-3.35 00:25:21.424 Starting 2 threads 00:25:21.424 [2024-07-15 02:27:19.927840] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
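The two-controller JSON printed above is assembled by gen_nvmf_target_json exactly as the trace suggests: one templated attach-controller object per subsystem id, expanded through a heredoc, comma-joined via IFS, and streamed to fio. A condensed sketch (variable names as in nvmf/common.sh; the real helper also normalizes the result through jq):

config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
 "adrfam": "ipv4", "trsvcid": "4420",
 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
 "hdgst": false, "ddgst": false},
 "method": "bdev_nvme_attach_controller"}
EOF
    )")
done
IFS=,
printf '%s\n' "${config[*]}"   # -> {...},{...}, as seen in the trace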
00:25:21.424 [2024-07-15 02:27:19.927906] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:31.414 00:25:31.414 filename0: (groupid=0, jobs=1): err= 0: pid=101285: Mon Jul 15 02:27:30 2024 00:25:31.414 read: IOPS=144, BW=577KiB/s (591kB/s)(5776KiB/10009msec) 00:25:31.414 slat (nsec): min=6432, max=51999, avg=10850.19, stdev=7388.39 00:25:31.414 clat (usec): min=387, max=44326, avg=27688.82, stdev=19055.19 00:25:31.414 lat (usec): min=393, max=44338, avg=27699.67, stdev=19054.74 00:25:31.414 clat percentiles (usec): 00:25:31.414 | 1.00th=[ 412], 5.00th=[ 441], 10.00th=[ 465], 20.00th=[ 510], 00:25:31.414 | 30.00th=[ 709], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:25:31.414 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:31.414 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:25:31.414 | 99.99th=[44303] 00:25:31.414 bw ( KiB/s): min= 448, max= 672, per=41.70%, avg=576.05, stdev=66.53, samples=20 00:25:31.414 iops : min= 112, max= 168, avg=144.00, stdev=16.62, samples=20 00:25:31.414 lat (usec) : 500=17.87%, 750=12.60%, 1000=2.22% 00:25:31.414 lat (msec) : 2=0.28%, 50=67.04% 00:25:31.414 cpu : usr=95.82%, sys=3.65%, ctx=14, majf=0, minf=0 00:25:31.414 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.414 issued rwts: total=1444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.414 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:31.414 filename1: (groupid=0, jobs=1): err= 0: pid=101286: Mon Jul 15 02:27:30 2024 00:25:31.414 read: IOPS=201, BW=804KiB/s (824kB/s)(8048KiB/10005msec) 00:25:31.414 slat (nsec): min=6240, max=51308, avg=9082.01, stdev=4550.65 00:25:31.414 clat (usec): min=362, max=42363, avg=19862.83, stdev=20218.45 00:25:31.414 lat (usec): min=368, max=42388, avg=19871.91, stdev=20218.23 00:25:31.414 clat percentiles (usec): 00:25:31.414 | 1.00th=[ 388], 5.00th=[ 404], 10.00th=[ 416], 20.00th=[ 437], 00:25:31.414 | 30.00th=[ 457], 40.00th=[ 482], 50.00th=[ 783], 60.00th=[40633], 00:25:31.414 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:31.414 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:25:31.414 | 99.99th=[42206] 00:25:31.414 bw ( KiB/s): min= 480, max= 1088, per=58.50%, avg=808.42, stdev=161.36, samples=19 00:25:31.414 iops : min= 120, max= 272, avg=202.11, stdev=40.34, samples=19 00:25:31.414 lat (usec) : 500=44.88%, 750=4.22%, 1000=2.78% 00:25:31.414 lat (msec) : 2=0.20%, 50=47.91% 00:25:31.414 cpu : usr=94.12%, sys=5.50%, ctx=6, majf=0, minf=9 00:25:31.414 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.414 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.414 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:31.414 00:25:31.414 Run status group 0 (all jobs): 00:25:31.414 READ: bw=1381KiB/s (1414kB/s), 577KiB/s-804KiB/s (591kB/s-824kB/s), io=13.5MiB (14.2MB), run=10005-10009msec 00:25:31.414 02:27:30 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:31.414 02:27:30 -- target/dif.sh@43 -- # local sub 00:25:31.414 02:27:30 -- target/dif.sh@45 -- # for sub in 
"$@" 00:25:31.414 02:27:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:31.414 02:27:30 -- target/dif.sh@36 -- # local sub_id=0 00:25:31.414 02:27:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:31.415 02:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 02:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.415 02:27:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:31.415 02:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 02:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.415 02:27:30 -- target/dif.sh@45 -- # for sub in "$@" 00:25:31.415 02:27:30 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:31.415 02:27:30 -- target/dif.sh@36 -- # local sub_id=1 00:25:31.415 02:27:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.415 02:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 02:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.415 02:27:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:31.415 02:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 ************************************ 00:25:31.415 END TEST fio_dif_1_multi_subsystems 00:25:31.415 ************************************ 00:25:31.415 02:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.415 00:25:31.415 real 0m11.186s 00:25:31.415 user 0m19.812s 00:25:31.415 sys 0m1.194s 00:25:31.415 02:27:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 02:27:30 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:31.415 02:27:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:31.415 02:27:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 ************************************ 00:25:31.415 START TEST fio_dif_rand_params 00:25:31.415 ************************************ 00:25:31.415 02:27:30 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:25:31.415 02:27:30 -- target/dif.sh@100 -- # local NULL_DIF 00:25:31.415 02:27:30 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:31.415 02:27:30 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:31.415 02:27:30 -- target/dif.sh@103 -- # bs=128k 00:25:31.415 02:27:30 -- target/dif.sh@103 -- # numjobs=3 00:25:31.415 02:27:30 -- target/dif.sh@103 -- # iodepth=3 00:25:31.415 02:27:30 -- target/dif.sh@103 -- # runtime=5 00:25:31.415 02:27:30 -- target/dif.sh@105 -- # create_subsystems 0 00:25:31.415 02:27:30 -- target/dif.sh@28 -- # local sub 00:25:31.415 02:27:30 -- target/dif.sh@30 -- # for sub in "$@" 00:25:31.415 02:27:30 -- target/dif.sh@31 -- # create_subsystem 0 00:25:31.415 02:27:30 -- target/dif.sh@18 -- # local sub_id=0 00:25:31.415 02:27:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:31.415 02:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 bdev_null0 00:25:31.415 02:27:30 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.415 02:27:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:31.415 02:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 02:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.415 02:27:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:31.415 02:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 02:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.415 02:27:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:31.415 02:27:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.415 02:27:30 -- common/autotest_common.sh@10 -- # set +x 00:25:31.415 [2024-07-15 02:27:30.398465] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.415 02:27:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.415 02:27:30 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:31.415 02:27:30 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:31.415 02:27:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:31.415 02:27:30 -- nvmf/common.sh@520 -- # config=() 00:25:31.415 02:27:30 -- nvmf/common.sh@520 -- # local subsystem config 00:25:31.415 02:27:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.415 02:27:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.415 02:27:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.415 { 00:25:31.415 "params": { 00:25:31.415 "name": "Nvme$subsystem", 00:25:31.415 "trtype": "$TEST_TRANSPORT", 00:25:31.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.415 "adrfam": "ipv4", 00:25:31.415 "trsvcid": "$NVMF_PORT", 00:25:31.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.415 "hdgst": ${hdgst:-false}, 00:25:31.415 "ddgst": ${ddgst:-false} 00:25:31.415 }, 00:25:31.415 "method": "bdev_nvme_attach_controller" 00:25:31.415 } 00:25:31.415 EOF 00:25:31.415 )") 00:25:31.415 02:27:30 -- target/dif.sh@82 -- # gen_fio_conf 00:25:31.415 02:27:30 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.415 02:27:30 -- target/dif.sh@54 -- # local file 00:25:31.415 02:27:30 -- target/dif.sh@56 -- # cat 00:25:31.415 02:27:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:31.415 02:27:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:31.415 02:27:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:31.415 02:27:30 -- nvmf/common.sh@542 -- # cat 00:25:31.415 02:27:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:31.415 02:27:30 -- common/autotest_common.sh@1320 -- # shift 00:25:31.415 02:27:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:31.415 02:27:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.415 02:27:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:31.415 02:27:30 
-- common/autotest_common.sh@1324 -- # grep libasan 00:25:31.415 02:27:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:31.415 02:27:30 -- nvmf/common.sh@544 -- # jq . 00:25:31.415 02:27:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:31.415 02:27:30 -- target/dif.sh@72 -- # (( file <= files )) 00:25:31.415 02:27:30 -- nvmf/common.sh@545 -- # IFS=, 00:25:31.415 02:27:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:31.415 "params": { 00:25:31.415 "name": "Nvme0", 00:25:31.415 "trtype": "tcp", 00:25:31.415 "traddr": "10.0.0.2", 00:25:31.415 "adrfam": "ipv4", 00:25:31.415 "trsvcid": "4420", 00:25:31.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:31.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:31.415 "hdgst": false, 00:25:31.415 "ddgst": false 00:25:31.415 }, 00:25:31.415 "method": "bdev_nvme_attach_controller" 00:25:31.415 }' 00:25:31.415 02:27:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:31.415 02:27:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:31.415 02:27:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:31.415 02:27:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:31.415 02:27:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:31.415 02:27:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:31.415 02:27:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:31.415 02:27:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:31.415 02:27:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:31.415 02:27:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:31.415 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:31.415 ... 00:25:31.415 fio-3.35 00:25:31.415 Starting 3 threads 00:25:31.725 [2024-07-15 02:27:31.027639] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
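On the plumbing side, note how fio receives both of its inputs without temporary files: the fio job file and the JSON bdev config arrive as /dev/fd paths. The helper appears to build them with process substitution, roughly as below (a sketch; gen_fio_conf expands the bs/numjobs/iodepth/runtime values set by the test into the [global] and per-filename job sections visible in the banner above):

  /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf <(create_json_sub_conf 0) \
      <(gen_fio_conf)

Bash replaces each <(...) with a /dev/fd/NN path, which is why the traced command line shows /dev/fd/62 and /dev/fd/61.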
00:25:31.725 [2024-07-15 02:27:31.027715] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:36.993 00:25:36.993 filename0: (groupid=0, jobs=1): err= 0: pid=101443: Mon Jul 15 02:27:36 2024 00:25:36.993 read: IOPS=287, BW=35.9MiB/s (37.6MB/s)(180MiB/5003msec) 00:25:36.993 slat (nsec): min=6859, max=57170, avg=13511.56, stdev=4765.89 00:25:36.993 clat (usec): min=5893, max=52499, avg=10424.15, stdev=5374.44 00:25:36.993 lat (usec): min=5907, max=52511, avg=10437.67, stdev=5374.35 00:25:36.993 clat percentiles (usec): 00:25:36.993 | 1.00th=[ 6456], 5.00th=[ 7373], 10.00th=[ 7963], 20.00th=[ 8979], 00:25:36.993 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10159], 00:25:36.993 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:25:36.993 | 99.00th=[50594], 99.50th=[51643], 99.90th=[52167], 99.95th=[52691], 00:25:36.993 | 99.99th=[52691] 00:25:36.993 bw ( KiB/s): min=29952, max=41216, per=36.41%, avg=36408.89, stdev=3746.66, samples=9 00:25:36.993 iops : min= 234, max= 322, avg=284.44, stdev=29.27, samples=9 00:25:36.993 lat (msec) : 10=51.22%, 20=47.11%, 50=0.49%, 100=1.18% 00:25:36.993 cpu : usr=91.40%, sys=6.66%, ctx=5, majf=0, minf=8 00:25:36.993 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:36.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.993 issued rwts: total=1437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.993 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:36.993 filename0: (groupid=0, jobs=1): err= 0: pid=101444: Mon Jul 15 02:27:36 2024 00:25:36.993 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5001msec) 00:25:36.993 slat (nsec): min=6462, max=55083, avg=9570.30, stdev=4311.93 00:25:36.993 clat (usec): min=4034, max=15896, avg=12571.21, stdev=2581.85 00:25:36.993 lat (usec): min=4045, max=15911, avg=12580.78, stdev=2581.66 00:25:36.993 clat percentiles (usec): 00:25:36.993 | 1.00th=[ 4146], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 9634], 00:25:36.993 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13435], 60.00th=[13829], 00:25:36.993 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15139], 00:25:36.993 | 99.00th=[15664], 99.50th=[15795], 99.90th=[15926], 99.95th=[15926], 00:25:36.993 | 99.99th=[15926] 00:25:36.993 bw ( KiB/s): min=27648, max=37632, per=30.55%, avg=30549.33, stdev=3042.53, samples=9 00:25:36.993 iops : min= 216, max= 294, avg=238.67, stdev=23.77, samples=9 00:25:36.993 lat (msec) : 10=20.65%, 20=79.35% 00:25:36.993 cpu : usr=92.76%, sys=6.02%, ctx=4, majf=0, minf=9 00:25:36.993 IO depths : 1=33.1%, 2=66.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:36.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.993 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.993 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:36.993 filename0: (groupid=0, jobs=1): err= 0: pid=101445: Mon Jul 15 02:27:36 2024 00:25:36.993 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5003msec) 00:25:36.993 slat (usec): min=6, max=165, avg=11.92, stdev= 7.67 00:25:36.993 clat (usec): min=3756, max=53932, avg=11704.27, stdev=6392.62 00:25:36.993 lat (usec): min=3767, max=53939, avg=11716.19, stdev=6393.00 00:25:36.993 clat percentiles (usec): 00:25:36.993 | 1.00th=[ 5932], 
5.00th=[ 6980], 10.00th=[ 8979], 20.00th=[10159], 00:25:36.993 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:25:36.993 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12387], 95.00th=[12911], 00:25:36.993 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:25:36.993 | 99.99th=[53740] 00:25:36.993 bw ( KiB/s): min=28416, max=38656, per=33.23%, avg=33223.11, stdev=3410.13, samples=9 00:25:36.993 iops : min= 222, max= 302, avg=259.56, stdev=26.64, samples=9 00:25:36.993 lat (msec) : 4=0.08%, 10=18.52%, 20=79.06%, 50=0.31%, 100=2.03% 00:25:36.993 cpu : usr=92.50%, sys=5.84%, ctx=57, majf=0, minf=9 00:25:36.993 IO depths : 1=7.3%, 2=92.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:36.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.993 issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.993 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:36.993 00:25:36.993 Run status group 0 (all jobs): 00:25:36.993 READ: bw=97.6MiB/s (102MB/s), 29.8MiB/s-35.9MiB/s (31.2MB/s-37.6MB/s), io=489MiB (512MB), run=5001-5003msec 00:25:36.993 02:27:36 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:36.993 02:27:36 -- target/dif.sh@43 -- # local sub 00:25:36.993 02:27:36 -- target/dif.sh@45 -- # for sub in "$@" 00:25:36.993 02:27:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:36.993 02:27:36 -- target/dif.sh@36 -- # local sub_id=0 00:25:36.993 02:27:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:36.993 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.993 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.993 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.993 02:27:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:36.993 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.993 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.993 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.993 02:27:36 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:36.993 02:27:36 -- target/dif.sh@109 -- # bs=4k 00:25:36.993 02:27:36 -- target/dif.sh@109 -- # numjobs=8 00:25:36.993 02:27:36 -- target/dif.sh@109 -- # iodepth=16 00:25:36.993 02:27:36 -- target/dif.sh@109 -- # runtime= 00:25:36.993 02:27:36 -- target/dif.sh@109 -- # files=2 00:25:36.993 02:27:36 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:36.993 02:27:36 -- target/dif.sh@28 -- # local sub 00:25:36.993 02:27:36 -- target/dif.sh@30 -- # for sub in "$@" 00:25:36.993 02:27:36 -- target/dif.sh@31 -- # create_subsystem 0 00:25:36.993 02:27:36 -- target/dif.sh@18 -- # local sub_id=0 00:25:36.993 02:27:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:36.993 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.993 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.993 bdev_null0 00:25:36.993 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.993 02:27:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:36.993 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.993 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.993 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.993 
02:27:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:36.993 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:36.994 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.994 [2024-07-15 02:27:36.412510] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@30 -- # for sub in "$@" 00:25:36.994 02:27:36 -- target/dif.sh@31 -- # create_subsystem 1 00:25:36.994 02:27:36 -- target/dif.sh@18 -- # local sub_id=1 00:25:36.994 02:27:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:36.994 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.994 bdev_null1 00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:36.994 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:36.994 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.994 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@30 -- # for sub in "$@" 00:25:36.994 02:27:36 -- target/dif.sh@31 -- # create_subsystem 2 00:25:36.994 02:27:36 -- target/dif.sh@18 -- # local sub_id=2 00:25:36.994 02:27:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:36.994 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.994 bdev_null2 00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:36.994 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:36.994 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 
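The surrounding block builds the three-subsystem layout for the DIF type 2 run (the last listener registration follows just below); each rpc_cmd is the shared RPC helper talking to nvmf_tgt over /var/tmp/spdk.sock. The same sequence written out against scripts/rpc.py directly, as a sketch of equivalents mirroring the traced arguments:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for sub in 0 1 2; do
      $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
          -t tcp -a 10.0.0.2 -s 4420
  done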
00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:36.994 02:27:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.994 02:27:36 -- common/autotest_common.sh@10 -- # set +x 00:25:36.994 02:27:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.994 02:27:36 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:36.994 02:27:36 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:36.994 02:27:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:36.994 02:27:36 -- nvmf/common.sh@520 -- # config=() 00:25:36.994 02:27:36 -- nvmf/common.sh@520 -- # local subsystem config 00:25:36.994 02:27:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.994 02:27:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.994 { 00:25:36.994 "params": { 00:25:36.994 "name": "Nvme$subsystem", 00:25:36.994 "trtype": "$TEST_TRANSPORT", 00:25:36.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.994 "adrfam": "ipv4", 00:25:36.994 "trsvcid": "$NVMF_PORT", 00:25:36.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.994 "hdgst": ${hdgst:-false}, 00:25:36.994 "ddgst": ${ddgst:-false} 00:25:36.994 }, 00:25:36.994 "method": "bdev_nvme_attach_controller" 00:25:36.994 } 00:25:36.994 EOF 00:25:36.994 )") 00:25:36.994 02:27:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.994 02:27:36 -- target/dif.sh@82 -- # gen_fio_conf 00:25:36.994 02:27:36 -- target/dif.sh@54 -- # local file 00:25:36.994 02:27:36 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.994 02:27:36 -- target/dif.sh@56 -- # cat 00:25:36.994 02:27:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:36.994 02:27:36 -- nvmf/common.sh@542 -- # cat 00:25:36.994 02:27:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:36.994 02:27:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:36.994 02:27:36 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:36.994 02:27:36 -- common/autotest_common.sh@1320 -- # shift 00:25:36.994 02:27:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:36.994 02:27:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.994 02:27:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:36.994 02:27:36 -- target/dif.sh@72 -- # (( file <= files )) 00:25:36.994 02:27:36 -- target/dif.sh@73 -- # cat 00:25:36.994 02:27:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:36.994 02:27:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:36.994 02:27:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:36.994 02:27:36 -- target/dif.sh@72 -- # (( file++ )) 00:25:36.994 02:27:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.994 02:27:36 -- target/dif.sh@72 -- # (( file <= files )) 00:25:36.994 02:27:36 -- target/dif.sh@73 -- # cat 00:25:36.994 02:27:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.994 { 00:25:36.994 "params": { 00:25:36.994 "name": "Nvme$subsystem", 00:25:36.994 "trtype": "$TEST_TRANSPORT", 00:25:36.994 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:36.994 "adrfam": "ipv4", 00:25:36.994 "trsvcid": "$NVMF_PORT", 00:25:36.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.994 "hdgst": ${hdgst:-false}, 00:25:36.994 "ddgst": ${ddgst:-false} 00:25:36.994 }, 00:25:36.994 "method": "bdev_nvme_attach_controller" 00:25:36.994 } 00:25:36.994 EOF 00:25:36.994 )") 00:25:36.994 02:27:36 -- nvmf/common.sh@542 -- # cat 00:25:36.994 02:27:36 -- target/dif.sh@72 -- # (( file++ )) 00:25:36.994 02:27:36 -- target/dif.sh@72 -- # (( file <= files )) 00:25:36.994 02:27:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:36.994 02:27:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:36.994 { 00:25:36.994 "params": { 00:25:36.994 "name": "Nvme$subsystem", 00:25:36.994 "trtype": "$TEST_TRANSPORT", 00:25:36.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.994 "adrfam": "ipv4", 00:25:36.994 "trsvcid": "$NVMF_PORT", 00:25:36.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.995 "hdgst": ${hdgst:-false}, 00:25:36.995 "ddgst": ${ddgst:-false} 00:25:36.995 }, 00:25:36.995 "method": "bdev_nvme_attach_controller" 00:25:36.995 } 00:25:36.995 EOF 00:25:36.995 )") 00:25:36.995 02:27:36 -- nvmf/common.sh@542 -- # cat 00:25:36.995 02:27:36 -- nvmf/common.sh@544 -- # jq . 00:25:36.995 02:27:36 -- nvmf/common.sh@545 -- # IFS=, 00:25:36.995 02:27:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:36.995 "params": { 00:25:36.995 "name": "Nvme0", 00:25:36.995 "trtype": "tcp", 00:25:36.995 "traddr": "10.0.0.2", 00:25:36.995 "adrfam": "ipv4", 00:25:36.995 "trsvcid": "4420", 00:25:36.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:36.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:36.995 "hdgst": false, 00:25:36.995 "ddgst": false 00:25:36.995 }, 00:25:36.995 "method": "bdev_nvme_attach_controller" 00:25:36.995 },{ 00:25:36.995 "params": { 00:25:36.995 "name": "Nvme1", 00:25:36.995 "trtype": "tcp", 00:25:36.995 "traddr": "10.0.0.2", 00:25:36.995 "adrfam": "ipv4", 00:25:36.995 "trsvcid": "4420", 00:25:36.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:36.995 "hdgst": false, 00:25:36.995 "ddgst": false 00:25:36.995 }, 00:25:36.995 "method": "bdev_nvme_attach_controller" 00:25:36.995 },{ 00:25:36.995 "params": { 00:25:36.995 "name": "Nvme2", 00:25:36.995 "trtype": "tcp", 00:25:36.995 "traddr": "10.0.0.2", 00:25:36.995 "adrfam": "ipv4", 00:25:36.995 "trsvcid": "4420", 00:25:36.995 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:36.995 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:36.995 "hdgst": false, 00:25:36.995 "ddgst": false 00:25:36.995 }, 00:25:36.995 "method": "bdev_nvme_attach_controller" 00:25:36.995 }' 00:25:36.995 02:27:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:36.995 02:27:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:36.995 02:27:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.995 02:27:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:36.995 02:27:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:36.995 02:27:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:37.253 02:27:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:37.253 02:27:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:37.253 02:27:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:37.253 02:27:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:37.253 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:37.253 ... 00:25:37.253 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:37.253 ... 00:25:37.253 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:37.253 ... 00:25:37.253 fio-3.35 00:25:37.253 Starting 24 threads 00:25:37.819 [2024-07-15 02:27:37.310130] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:37.819 [2024-07-15 02:27:37.310204] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:50.017 00:25:50.017 filename0: (groupid=0, jobs=1): err= 0: pid=101546: Mon Jul 15 02:27:47 2024 00:25:50.017 read: IOPS=218, BW=876KiB/s (897kB/s)(8764KiB/10005msec) 00:25:50.017 slat (usec): min=4, max=8059, avg=19.41, stdev=242.60 00:25:50.017 clat (msec): min=5, max=122, avg=72.89, stdev=18.78 00:25:50.017 lat (msec): min=5, max=122, avg=72.91, stdev=18.78 00:25:50.017 clat percentiles (msec): 00:25:50.017 | 1.00th=[ 15], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:25:50.017 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 73], 00:25:50.017 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 97], 95.00th=[ 107], 00:25:50.017 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 124], 99.95th=[ 124], 00:25:50.017 | 99.99th=[ 124] 00:25:50.017 bw ( KiB/s): min= 768, max= 1072, per=3.71%, avg=872.11, stdev=110.77, samples=19 00:25:50.017 iops : min= 192, max= 268, avg=218.00, stdev=27.69, samples=19 00:25:50.017 lat (msec) : 10=0.32%, 20=0.73%, 50=10.50%, 100=80.51%, 250=7.94% 00:25:50.017 cpu : usr=33.35%, sys=0.58%, ctx=889, majf=0, minf=9 00:25:50.017 IO depths : 1=1.5%, 2=3.7%, 4=12.7%, 8=70.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:25:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 issued rwts: total=2191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.017 filename0: (groupid=0, jobs=1): err= 0: pid=101547: Mon Jul 15 02:27:47 2024 00:25:50.017 read: IOPS=281, BW=1125KiB/s (1152kB/s)(11.0MiB/10044msec) 00:25:50.017 slat (usec): min=4, max=8018, avg=15.94, stdev=169.04 00:25:50.017 clat (msec): min=8, max=122, avg=56.73, stdev=16.25 00:25:50.017 lat (msec): min=8, max=122, avg=56.75, stdev=16.25 00:25:50.017 clat percentiles (msec): 00:25:50.017 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 44], 00:25:50.017 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 56], 60.00th=[ 61], 00:25:50.017 | 70.00th=[ 65], 80.00th=[ 70], 90.00th=[ 79], 95.00th=[ 85], 00:25:50.017 | 99.00th=[ 102], 99.50th=[ 108], 99.90th=[ 123], 99.95th=[ 124], 00:25:50.017 | 99.99th=[ 124] 00:25:50.017 bw ( KiB/s): min= 912, max= 1408, per=4.77%, avg=1123.20, stdev=113.95, samples=20 00:25:50.017 iops : min= 228, max= 352, avg=280.80, stdev=28.49, samples=20 00:25:50.017 lat (msec) : 10=1.70%, 50=35.59%, 100=61.44%, 250=1.27% 00:25:50.017 cpu : usr=42.49%, sys=0.93%, ctx=1551, majf=0, minf=9 00:25:50.017 IO depths : 1=0.3%, 2=0.7%, 4=7.0%, 8=78.7%, 16=13.3%, 32=0.0%, >=64=0.0% 
00:25:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 complete : 0=0.0%, 4=89.2%, 8=6.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 issued rwts: total=2824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.017 filename0: (groupid=0, jobs=1): err= 0: pid=101548: Mon Jul 15 02:27:47 2024 00:25:50.017 read: IOPS=231, BW=924KiB/s (946kB/s)(9248KiB/10007msec) 00:25:50.017 slat (usec): min=4, max=4024, avg=13.64, stdev=83.90 00:25:50.017 clat (msec): min=13, max=155, avg=69.16, stdev=19.57 00:25:50.017 lat (msec): min=13, max=155, avg=69.17, stdev=19.57 00:25:50.017 clat percentiles (msec): 00:25:50.017 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 55], 00:25:50.017 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 72], 00:25:50.017 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 104], 00:25:50.017 | 99.00th=[ 122], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:25:50.017 | 99.99th=[ 157] 00:25:50.017 bw ( KiB/s): min= 761, max= 1256, per=3.91%, avg=921.63, stdev=152.65, samples=19 00:25:50.017 iops : min= 190, max= 314, avg=230.37, stdev=38.21, samples=19 00:25:50.017 lat (msec) : 20=0.69%, 50=15.44%, 100=78.29%, 250=5.58% 00:25:50.017 cpu : usr=40.17%, sys=0.79%, ctx=1276, majf=0, minf=9 00:25:50.017 IO depths : 1=1.5%, 2=3.6%, 4=13.7%, 8=69.4%, 16=11.8%, 32=0.0%, >=64=0.0% 00:25:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 issued rwts: total=2312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.017 filename0: (groupid=0, jobs=1): err= 0: pid=101549: Mon Jul 15 02:27:47 2024 00:25:50.017 read: IOPS=225, BW=902KiB/s (923kB/s)(9032KiB/10016msec) 00:25:50.017 slat (usec): min=4, max=10019, avg=24.06, stdev=319.50 00:25:50.017 clat (msec): min=23, max=156, avg=70.78, stdev=18.87 00:25:50.017 lat (msec): min=23, max=156, avg=70.80, stdev=18.87 00:25:50.017 clat percentiles (msec): 00:25:50.017 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 56], 00:25:50.017 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 72], 00:25:50.017 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 105], 00:25:50.017 | 99.00th=[ 128], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 157], 00:25:50.017 | 99.99th=[ 157] 00:25:50.017 bw ( KiB/s): min= 680, max= 1408, per=3.84%, avg=903.47, stdev=153.99, samples=19 00:25:50.017 iops : min= 170, max= 352, avg=225.84, stdev=38.50, samples=19 00:25:50.017 lat (msec) : 50=13.60%, 100=80.03%, 250=6.38% 00:25:50.017 cpu : usr=41.98%, sys=0.80%, ctx=1247, majf=0, minf=9 00:25:50.017 IO depths : 1=2.4%, 2=5.8%, 4=15.7%, 8=65.4%, 16=10.8%, 32=0.0%, >=64=0.0% 00:25:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 complete : 0=0.0%, 4=91.6%, 8=3.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 issued rwts: total=2258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.017 filename0: (groupid=0, jobs=1): err= 0: pid=101550: Mon Jul 15 02:27:47 2024 00:25:50.017 read: IOPS=224, BW=898KiB/s (919kB/s)(8980KiB/10001msec) 00:25:50.017 slat (usec): min=4, max=6405, avg=15.08, stdev=135.08 00:25:50.017 clat (msec): min=35, max=143, avg=71.14, stdev=16.58 00:25:50.017 lat (msec): min=35, max=143, avg=71.16, stdev=16.58 
00:25:50.017 clat percentiles (msec): 00:25:50.017 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:25:50.017 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 69], 60.00th=[ 72], 00:25:50.017 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 106], 00:25:50.017 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 144], 99.95th=[ 144], 00:25:50.017 | 99.99th=[ 144] 00:25:50.017 bw ( KiB/s): min= 768, max= 1024, per=3.82%, avg=900.53, stdev=90.16, samples=19 00:25:50.017 iops : min= 192, max= 256, avg=225.11, stdev=22.55, samples=19 00:25:50.017 lat (msec) : 50=8.20%, 100=85.26%, 250=6.55% 00:25:50.017 cpu : usr=39.82%, sys=0.73%, ctx=1148, majf=0, minf=9 00:25:50.017 IO depths : 1=2.9%, 2=6.5%, 4=16.4%, 8=63.9%, 16=10.3%, 32=0.0%, >=64=0.0% 00:25:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 complete : 0=0.0%, 4=91.9%, 8=3.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.017 issued rwts: total=2245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.017 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.017 filename0: (groupid=0, jobs=1): err= 0: pid=101551: Mon Jul 15 02:27:47 2024 00:25:50.017 read: IOPS=227, BW=908KiB/s (930kB/s)(9088KiB/10005msec) 00:25:50.017 slat (usec): min=4, max=4022, avg=13.92, stdev=84.39 00:25:50.017 clat (msec): min=12, max=130, avg=70.35, stdev=17.42 00:25:50.017 lat (msec): min=12, max=130, avg=70.37, stdev=17.42 00:25:50.017 clat percentiles (msec): 00:25:50.017 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:25:50.017 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:25:50.017 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 106], 00:25:50.017 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:25:50.017 | 99.99th=[ 131] 00:25:50.017 bw ( KiB/s): min= 680, max= 1072, per=3.85%, avg=905.47, stdev=99.05, samples=19 00:25:50.017 iops : min= 170, max= 268, avg=226.32, stdev=24.79, samples=19 00:25:50.017 lat (msec) : 20=0.26%, 50=11.58%, 100=82.44%, 250=5.72% 00:25:50.017 cpu : usr=38.43%, sys=0.73%, ctx=1094, majf=0, minf=9 00:25:50.017 IO depths : 1=1.7%, 2=4.1%, 4=12.6%, 8=69.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:25:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 complete : 0=0.0%, 4=91.0%, 8=4.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 issued rwts: total=2272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.018 filename0: (groupid=0, jobs=1): err= 0: pid=101552: Mon Jul 15 02:27:47 2024 00:25:50.018 read: IOPS=224, BW=900KiB/s (921kB/s)(8996KiB/10001msec) 00:25:50.018 slat (usec): min=4, max=8024, avg=21.27, stdev=253.46 00:25:50.018 clat (msec): min=2, max=148, avg=70.98, stdev=19.52 00:25:50.018 lat (msec): min=2, max=148, avg=71.00, stdev=19.52 00:25:50.018 clat percentiles (msec): 00:25:50.018 | 1.00th=[ 5], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:25:50.018 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:25:50.018 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 106], 00:25:50.018 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 148], 99.95th=[ 148], 00:25:50.018 | 99.99th=[ 148] 00:25:50.018 bw ( KiB/s): min= 697, max= 1104, per=3.76%, avg=885.53, stdev=111.24, samples=19 00:25:50.018 iops : min= 174, max= 276, avg=221.37, stdev=27.83, samples=19 00:25:50.018 lat (msec) : 4=0.71%, 10=0.71%, 20=0.09%, 50=10.98%, 100=81.59% 00:25:50.018 lat (msec) : 250=5.91% 00:25:50.018 cpu : usr=37.09%, 
sys=0.70%, ctx=1066, majf=0, minf=9 00:25:50.018 IO depths : 1=2.2%, 2=5.2%, 4=14.6%, 8=67.1%, 16=10.9%, 32=0.0%, >=64=0.0% 00:25:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 complete : 0=0.0%, 4=91.4%, 8=3.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.018 filename0: (groupid=0, jobs=1): err= 0: pid=101553: Mon Jul 15 02:27:47 2024 00:25:50.018 read: IOPS=250, BW=1003KiB/s (1028kB/s)(9.82MiB/10021msec) 00:25:50.018 slat (usec): min=4, max=8007, avg=16.41, stdev=178.30 00:25:50.018 clat (msec): min=23, max=147, avg=63.61, stdev=20.62 00:25:50.018 lat (msec): min=23, max=147, avg=63.63, stdev=20.61 00:25:50.018 clat percentiles (msec): 00:25:50.018 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 45], 00:25:50.018 | 30.00th=[ 49], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 68], 00:25:50.018 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 97], 00:25:50.018 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:25:50.018 | 99.99th=[ 148] 00:25:50.018 bw ( KiB/s): min= 768, max= 1456, per=4.32%, avg=1017.89, stdev=192.06, samples=19 00:25:50.018 iops : min= 192, max= 364, avg=254.42, stdev=48.06, samples=19 00:25:50.018 lat (msec) : 50=32.18%, 100=64.44%, 250=3.38% 00:25:50.018 cpu : usr=41.20%, sys=0.81%, ctx=1308, majf=0, minf=9 00:25:50.018 IO depths : 1=0.8%, 2=1.7%, 4=8.6%, 8=75.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:25:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 complete : 0=0.0%, 4=89.5%, 8=6.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 issued rwts: total=2514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.018 filename1: (groupid=0, jobs=1): err= 0: pid=101554: Mon Jul 15 02:27:47 2024 00:25:50.018 read: IOPS=217, BW=868KiB/s (889kB/s)(8696KiB/10015msec) 00:25:50.018 slat (usec): min=4, max=4020, avg=14.06, stdev=86.20 00:25:50.018 clat (msec): min=18, max=165, avg=73.60, stdev=19.24 00:25:50.018 lat (msec): min=18, max=165, avg=73.61, stdev=19.24 00:25:50.018 clat percentiles (msec): 00:25:50.018 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 60], 00:25:50.018 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:25:50.018 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 110], 00:25:50.018 | 99.00th=[ 133], 99.50th=[ 134], 99.90th=[ 165], 99.95th=[ 165], 00:25:50.018 | 99.99th=[ 165] 00:25:50.018 bw ( KiB/s): min= 640, max= 1024, per=3.69%, avg=868.11, stdev=102.12, samples=19 00:25:50.018 iops : min= 160, max= 256, avg=217.00, stdev=25.52, samples=19 00:25:50.018 lat (msec) : 20=0.23%, 50=8.92%, 100=83.21%, 250=7.64% 00:25:50.018 cpu : usr=35.60%, sys=0.63%, ctx=966, majf=0, minf=9 00:25:50.018 IO depths : 1=2.7%, 2=6.0%, 4=16.6%, 8=64.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:25:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 complete : 0=0.0%, 4=91.6%, 8=2.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.018 filename1: (groupid=0, jobs=1): err= 0: pid=101555: Mon Jul 15 02:27:47 2024 00:25:50.018 read: IOPS=251, BW=1007KiB/s (1032kB/s)(9.87MiB/10030msec) 00:25:50.018 slat (usec): min=6, max=4051, avg=14.68, stdev=103.39 00:25:50.018 clat 
(msec): min=28, max=126, avg=63.38, stdev=18.57 00:25:50.018 lat (msec): min=28, max=126, avg=63.39, stdev=18.57 00:25:50.018 clat percentiles (msec): 00:25:50.018 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 47], 00:25:50.018 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 66], 00:25:50.018 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 89], 95.00th=[ 96], 00:25:50.018 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 127], 00:25:50.018 | 99.99th=[ 127] 00:25:50.018 bw ( KiB/s): min= 814, max= 1200, per=4.28%, avg=1007.35, stdev=132.70, samples=20 00:25:50.018 iops : min= 203, max= 300, avg=251.80, stdev=33.21, samples=20 00:25:50.018 lat (msec) : 50=29.65%, 100=66.98%, 250=3.37% 00:25:50.018 cpu : usr=37.67%, sys=0.77%, ctx=1064, majf=0, minf=9 00:25:50.018 IO depths : 1=0.9%, 2=2.3%, 4=9.1%, 8=75.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:25:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 issued rwts: total=2526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.018 filename1: (groupid=0, jobs=1): err= 0: pid=101556: Mon Jul 15 02:27:47 2024 00:25:50.018 read: IOPS=263, BW=1056KiB/s (1081kB/s)(10.3MiB/10023msec) 00:25:50.018 slat (usec): min=4, max=3036, avg=12.53, stdev=59.08 00:25:50.018 clat (msec): min=9, max=118, avg=60.48, stdev=17.01 00:25:50.018 lat (msec): min=9, max=118, avg=60.49, stdev=17.01 00:25:50.018 clat percentiles (msec): 00:25:50.018 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 46], 00:25:50.018 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 64], 00:25:50.018 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 91], 00:25:50.018 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 120], 99.95th=[ 120], 00:25:50.018 | 99.99th=[ 120] 00:25:50.018 bw ( KiB/s): min= 808, max= 1296, per=4.48%, avg=1055.30, stdev=132.57, samples=20 00:25:50.018 iops : min= 202, max= 324, avg=263.80, stdev=33.12, samples=20 00:25:50.018 lat (msec) : 10=0.60%, 20=0.60%, 50=29.34%, 100=67.52%, 250=1.93% 00:25:50.018 cpu : usr=40.90%, sys=0.81%, ctx=1177, majf=0, minf=9 00:25:50.018 IO depths : 1=1.0%, 2=2.5%, 4=9.5%, 8=74.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:25:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 issued rwts: total=2645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.018 filename1: (groupid=0, jobs=1): err= 0: pid=101557: Mon Jul 15 02:27:47 2024 00:25:50.018 read: IOPS=265, BW=1061KiB/s (1087kB/s)(10.4MiB/10034msec) 00:25:50.018 slat (usec): min=3, max=7143, avg=15.99, stdev=158.51 00:25:50.018 clat (msec): min=5, max=142, avg=60.15, stdev=19.56 00:25:50.018 lat (msec): min=5, max=142, avg=60.16, stdev=19.56 00:25:50.018 clat percentiles (msec): 00:25:50.018 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 46], 00:25:50.018 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 63], 00:25:50.018 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 95], 00:25:50.018 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 144], 99.95th=[ 144], 00:25:50.018 | 99.99th=[ 144] 00:25:50.018 bw ( KiB/s): min= 816, max= 1833, per=4.51%, avg=1061.25, stdev=211.86, samples=20 00:25:50.018 iops : min= 204, max= 458, avg=265.30, stdev=52.92, samples=20 00:25:50.018 lat (msec) 
: 10=1.80%, 20=1.20%, 50=32.79%, 100=61.61%, 250=2.59% 00:25:50.018 cpu : usr=35.52%, sys=0.69%, ctx=978, majf=0, minf=9 00:25:50.018 IO depths : 1=0.7%, 2=1.6%, 4=8.4%, 8=76.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:25:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 issued rwts: total=2662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.018 filename1: (groupid=0, jobs=1): err= 0: pid=101558: Mon Jul 15 02:27:47 2024 00:25:50.018 read: IOPS=228, BW=915KiB/s (937kB/s)(9168KiB/10022msec) 00:25:50.018 slat (usec): min=4, max=8128, avg=20.87, stdev=259.90 00:25:50.018 clat (msec): min=32, max=133, avg=69.80, stdev=17.69 00:25:50.018 lat (msec): min=32, max=133, avg=69.82, stdev=17.71 00:25:50.018 clat percentiles (msec): 00:25:50.018 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:25:50.018 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 72], 00:25:50.018 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 105], 00:25:50.018 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 134], 99.95th=[ 134], 00:25:50.018 | 99.99th=[ 134] 00:25:50.018 bw ( KiB/s): min= 768, max= 1072, per=3.87%, avg=911.00, stdev=99.89, samples=19 00:25:50.018 iops : min= 192, max= 268, avg=227.68, stdev=24.95, samples=19 00:25:50.018 lat (msec) : 50=12.17%, 100=81.54%, 250=6.28% 00:25:50.018 cpu : usr=33.97%, sys=0.71%, ctx=1004, majf=0, minf=9 00:25:50.018 IO depths : 1=1.6%, 2=4.0%, 4=13.0%, 8=69.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:25:50.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.018 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.018 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.018 filename1: (groupid=0, jobs=1): err= 0: pid=101559: Mon Jul 15 02:27:47 2024 00:25:50.018 read: IOPS=259, BW=1040KiB/s (1065kB/s)(10.2MiB/10021msec) 00:25:50.018 slat (usec): min=6, max=8032, avg=22.95, stdev=260.08 00:25:50.018 clat (msec): min=29, max=131, avg=61.39, stdev=17.75 00:25:50.019 lat (msec): min=29, max=131, avg=61.41, stdev=17.76 00:25:50.019 clat percentiles (msec): 00:25:50.019 | 1.00th=[ 34], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 46], 00:25:50.019 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 64], 00:25:50.019 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 86], 95.00th=[ 95], 00:25:50.019 | 99.00th=[ 108], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 132], 00:25:50.019 | 99.99th=[ 132] 00:25:50.019 bw ( KiB/s): min= 768, max= 1376, per=4.39%, avg=1034.95, stdev=160.09, samples=20 00:25:50.019 iops : min= 192, max= 344, avg=258.70, stdev=40.04, samples=20 00:25:50.019 lat (msec) : 50=32.55%, 100=64.49%, 250=2.96% 00:25:50.019 cpu : usr=42.31%, sys=0.74%, ctx=1221, majf=0, minf=9 00:25:50.019 IO depths : 1=1.3%, 2=2.9%, 4=9.4%, 8=74.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:25:50.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 issued rwts: total=2605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.019 filename1: (groupid=0, jobs=1): err= 0: pid=101560: Mon Jul 15 02:27:47 2024 00:25:50.019 read: IOPS=235, BW=943KiB/s (966kB/s)(9456KiB/10027msec) 
00:25:50.019 slat (usec): min=4, max=3237, avg=13.73, stdev=66.63 00:25:50.019 clat (msec): min=26, max=143, avg=67.75, stdev=18.56 00:25:50.019 lat (msec): min=26, max=143, avg=67.77, stdev=18.55 00:25:50.019 clat percentiles (msec): 00:25:50.019 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 54], 00:25:50.019 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 70], 00:25:50.019 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 102], 00:25:50.019 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:25:50.019 | 99.99th=[ 144] 00:25:50.019 bw ( KiB/s): min= 640, max= 1229, per=3.96%, avg=931.95, stdev=143.17, samples=19 00:25:50.019 iops : min= 160, max= 307, avg=232.95, stdev=35.77, samples=19 00:25:50.019 lat (msec) : 50=15.99%, 100=78.76%, 250=5.25% 00:25:50.019 cpu : usr=42.06%, sys=0.86%, ctx=1265, majf=0, minf=9 00:25:50.019 IO depths : 1=1.4%, 2=3.2%, 4=10.2%, 8=72.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:25:50.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 complete : 0=0.0%, 4=90.4%, 8=5.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.019 filename1: (groupid=0, jobs=1): err= 0: pid=101561: Mon Jul 15 02:27:47 2024 00:25:50.019 read: IOPS=248, BW=993KiB/s (1016kB/s)(9936KiB/10011msec) 00:25:50.019 slat (usec): min=5, max=8021, avg=21.57, stdev=278.24 00:25:50.019 clat (msec): min=28, max=129, avg=64.33, stdev=17.59 00:25:50.019 lat (msec): min=28, max=129, avg=64.35, stdev=17.59 00:25:50.019 clat percentiles (msec): 00:25:50.019 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 48], 00:25:50.019 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 69], 00:25:50.019 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 88], 95.00th=[ 96], 00:25:50.019 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:25:50.019 | 99.99th=[ 130] 00:25:50.019 bw ( KiB/s): min= 768, max= 1280, per=4.22%, avg=993.58, stdev=139.88, samples=19 00:25:50.019 iops : min= 192, max= 320, avg=248.37, stdev=35.01, samples=19 00:25:50.019 lat (msec) : 50=25.72%, 100=71.18%, 250=3.10% 00:25:50.019 cpu : usr=33.19%, sys=0.79%, ctx=882, majf=0, minf=9 00:25:50.019 IO depths : 1=1.0%, 2=2.3%, 4=9.4%, 8=74.5%, 16=12.8%, 32=0.0%, >=64=0.0% 00:25:50.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 issued rwts: total=2484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.019 filename2: (groupid=0, jobs=1): err= 0: pid=101562: Mon Jul 15 02:27:47 2024 00:25:50.019 read: IOPS=239, BW=960KiB/s (983kB/s)(9616KiB/10019msec) 00:25:50.019 slat (nsec): min=4809, max=89508, avg=11689.02, stdev=6708.44 00:25:50.019 clat (msec): min=24, max=143, avg=66.60, stdev=18.98 00:25:50.019 lat (msec): min=24, max=143, avg=66.61, stdev=18.98 00:25:50.019 clat percentiles (msec): 00:25:50.019 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 48], 00:25:50.019 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:25:50.019 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 103], 00:25:50.019 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:25:50.019 | 99.99th=[ 144] 00:25:50.019 bw ( KiB/s): min= 768, max= 1328, per=4.09%, avg=963.26, stdev=132.52, samples=19 00:25:50.019 iops : min= 
192, max= 332, avg=240.79, stdev=33.14, samples=19 00:25:50.019 lat (msec) : 50=23.38%, 100=71.30%, 250=5.32% 00:25:50.019 cpu : usr=34.88%, sys=0.62%, ctx=968, majf=0, minf=9 00:25:50.019 IO depths : 1=1.1%, 2=2.5%, 4=9.6%, 8=74.2%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:50.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 complete : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 issued rwts: total=2404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.019 filename2: (groupid=0, jobs=1): err= 0: pid=101563: Mon Jul 15 02:27:47 2024 00:25:50.019 read: IOPS=235, BW=943KiB/s (966kB/s)(9464KiB/10031msec) 00:25:50.019 slat (usec): min=4, max=12010, avg=23.37, stdev=321.18 00:25:50.019 clat (msec): min=32, max=137, avg=67.71, stdev=18.97 00:25:50.019 lat (msec): min=32, max=137, avg=67.74, stdev=18.97 00:25:50.019 clat percentiles (msec): 00:25:50.019 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:25:50.019 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:25:50.019 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 100], 00:25:50.019 | 99.00th=[ 113], 99.50th=[ 123], 99.90th=[ 138], 99.95th=[ 138], 00:25:50.019 | 99.99th=[ 138] 00:25:50.019 bw ( KiB/s): min= 768, max= 1200, per=3.99%, avg=939.80, stdev=126.26, samples=20 00:25:50.019 iops : min= 192, max= 300, avg=234.90, stdev=31.59, samples=20 00:25:50.019 lat (msec) : 50=25.40%, 100=69.61%, 250=4.99% 00:25:50.019 cpu : usr=33.21%, sys=0.75%, ctx=890, majf=0, minf=9 00:25:50.019 IO depths : 1=1.1%, 2=2.2%, 4=9.8%, 8=74.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:25:50.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 complete : 0=0.0%, 4=89.8%, 8=5.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.019 filename2: (groupid=0, jobs=1): err= 0: pid=101564: Mon Jul 15 02:27:47 2024 00:25:50.019 read: IOPS=251, BW=1006KiB/s (1030kB/s)(9.85MiB/10024msec) 00:25:50.019 slat (nsec): min=6695, max=62966, avg=11909.85, stdev=6324.35 00:25:50.019 clat (msec): min=26, max=143, avg=63.51, stdev=18.81 00:25:50.019 lat (msec): min=26, max=143, avg=63.53, stdev=18.81 00:25:50.019 clat percentiles (msec): 00:25:50.019 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 47], 00:25:50.019 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 67], 00:25:50.019 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 86], 95.00th=[ 100], 00:25:50.019 | 99.00th=[ 128], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:25:50.019 | 99.99th=[ 144] 00:25:50.019 bw ( KiB/s): min= 768, max= 1472, per=4.26%, avg=1002.32, stdev=192.14, samples=19 00:25:50.019 iops : min= 192, max= 368, avg=250.53, stdev=48.07, samples=19 00:25:50.019 lat (msec) : 50=25.82%, 100=69.85%, 250=4.32% 00:25:50.019 cpu : usr=36.72%, sys=0.75%, ctx=1225, majf=0, minf=9 00:25:50.019 IO depths : 1=1.5%, 2=3.6%, 4=11.2%, 8=71.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:25:50.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 issued rwts: total=2521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.019 filename2: (groupid=0, jobs=1): err= 0: pid=101565: Mon Jul 15 02:27:47 2024 00:25:50.019 
read: IOPS=248, BW=995KiB/s (1019kB/s)(9980KiB/10030msec) 00:25:50.019 slat (usec): min=4, max=8021, avg=17.90, stdev=226.78 00:25:50.019 clat (msec): min=21, max=153, avg=64.18, stdev=19.25 00:25:50.019 lat (msec): min=21, max=153, avg=64.20, stdev=19.24 00:25:50.019 clat percentiles (msec): 00:25:50.019 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:25:50.019 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 70], 00:25:50.019 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 99], 00:25:50.019 | 99.00th=[ 121], 99.50th=[ 128], 99.90th=[ 140], 99.95th=[ 140], 00:25:50.019 | 99.99th=[ 155] 00:25:50.019 bw ( KiB/s): min= 768, max= 1344, per=4.21%, avg=990.90, stdev=177.70, samples=20 00:25:50.019 iops : min= 192, max= 336, avg=247.65, stdev=44.42, samples=20 00:25:50.019 lat (msec) : 50=29.14%, 100=66.09%, 250=4.77% 00:25:50.019 cpu : usr=38.39%, sys=0.70%, ctx=1002, majf=0, minf=9 00:25:50.019 IO depths : 1=0.6%, 2=1.4%, 4=8.5%, 8=76.1%, 16=13.5%, 32=0.0%, >=64=0.0% 00:25:50.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 complete : 0=0.0%, 4=89.4%, 8=6.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.019 issued rwts: total=2495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.019 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.019 filename2: (groupid=0, jobs=1): err= 0: pid=101566: Mon Jul 15 02:27:47 2024 00:25:50.019 read: IOPS=275, BW=1104KiB/s (1130kB/s)(10.8MiB/10018msec) 00:25:50.019 slat (usec): min=4, max=2082, avg=12.43, stdev=39.80 00:25:50.019 clat (msec): min=21, max=129, avg=57.92, stdev=19.13 00:25:50.019 lat (msec): min=21, max=129, avg=57.93, stdev=19.13 00:25:50.019 clat percentiles (msec): 00:25:50.019 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 41], 00:25:50.019 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 61], 00:25:50.019 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 87], 95.00th=[ 93], 00:25:50.019 | 99.00th=[ 116], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 130], 00:25:50.019 | 99.99th=[ 130] 00:25:50.019 bw ( KiB/s): min= 688, max= 1472, per=4.67%, avg=1098.85, stdev=195.62, samples=20 00:25:50.019 iops : min= 172, max= 368, avg=274.70, stdev=48.90, samples=20 00:25:50.019 lat (msec) : 50=44.86%, 100=52.21%, 250=2.93% 00:25:50.020 cpu : usr=45.22%, sys=0.93%, ctx=1339, majf=0, minf=9 00:25:50.020 IO depths : 1=1.3%, 2=2.6%, 4=9.7%, 8=74.5%, 16=12.0%, 32=0.0%, >=64=0.0% 00:25:50.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.020 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.020 issued rwts: total=2764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.020 filename2: (groupid=0, jobs=1): err= 0: pid=101567: Mon Jul 15 02:27:47 2024 00:25:50.020 read: IOPS=245, BW=983KiB/s (1007kB/s)(9852KiB/10020msec) 00:25:50.020 slat (usec): min=4, max=8024, avg=23.98, stdev=290.87 00:25:50.020 clat (msec): min=15, max=118, avg=64.88, stdev=16.68 00:25:50.020 lat (msec): min=15, max=118, avg=64.91, stdev=16.67 00:25:50.020 clat percentiles (msec): 00:25:50.020 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 49], 00:25:50.020 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:25:50.020 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 85], 95.00th=[ 96], 00:25:50.020 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 120], 99.95th=[ 120], 00:25:50.020 | 99.99th=[ 120] 00:25:50.020 bw ( KiB/s): min= 768, max= 1112, per=4.16%, avg=978.10, 
stdev=106.98, samples=20 00:25:50.020 iops : min= 192, max= 278, avg=244.50, stdev=26.72, samples=20 00:25:50.020 lat (msec) : 20=0.65%, 50=22.05%, 100=74.02%, 250=3.29% 00:25:50.020 cpu : usr=33.80%, sys=0.79%, ctx=901, majf=0, minf=9 00:25:50.020 IO depths : 1=1.3%, 2=3.0%, 4=11.7%, 8=71.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:25:50.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.020 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.020 issued rwts: total=2463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.020 filename2: (groupid=0, jobs=1): err= 0: pid=101568: Mon Jul 15 02:27:47 2024 00:25:50.020 read: IOPS=279, BW=1120KiB/s (1147kB/s)(11.0MiB/10030msec) 00:25:50.020 slat (usec): min=4, max=3967, avg=12.82, stdev=74.86 00:25:50.020 clat (msec): min=6, max=111, avg=57.07, stdev=17.78 00:25:50.020 lat (msec): min=6, max=112, avg=57.09, stdev=17.78 00:25:50.020 clat percentiles (msec): 00:25:50.020 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 44], 00:25:50.020 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 60], 00:25:50.020 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 88], 00:25:50.020 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 112], 99.95th=[ 112], 00:25:50.020 | 99.99th=[ 112] 00:25:50.020 bw ( KiB/s): min= 880, max= 1584, per=4.74%, avg=1116.40, stdev=173.97, samples=20 00:25:50.020 iops : min= 220, max= 396, avg=279.10, stdev=43.49, samples=20 00:25:50.020 lat (msec) : 10=2.28%, 50=39.42%, 100=56.16%, 250=2.14% 00:25:50.020 cpu : usr=41.36%, sys=0.85%, ctx=1349, majf=0, minf=9 00:25:50.020 IO depths : 1=1.2%, 2=3.0%, 4=11.2%, 8=72.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:25:50.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.020 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.020 issued rwts: total=2808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.020 filename2: (groupid=0, jobs=1): err= 0: pid=101569: Mon Jul 15 02:27:47 2024 00:25:50.020 read: IOPS=263, BW=1055KiB/s (1080kB/s)(10.3MiB/10043msec) 00:25:50.020 slat (usec): min=4, max=8028, avg=14.92, stdev=155.87 00:25:50.020 clat (msec): min=12, max=118, avg=60.54, stdev=17.60 00:25:50.020 lat (msec): min=12, max=118, avg=60.56, stdev=17.60 00:25:50.020 clat percentiles (msec): 00:25:50.020 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 47], 00:25:50.020 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 64], 00:25:50.020 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 91], 00:25:50.020 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 118], 99.95th=[ 118], 00:25:50.020 | 99.99th=[ 118] 00:25:50.020 bw ( KiB/s): min= 768, max= 1352, per=4.47%, avg=1052.75, stdev=162.39, samples=20 00:25:50.020 iops : min= 192, max= 338, avg=263.15, stdev=40.58, samples=20 00:25:50.020 lat (msec) : 20=1.21%, 50=35.03%, 100=61.87%, 250=1.89% 00:25:50.020 cpu : usr=33.61%, sys=0.50%, ctx=891, majf=0, minf=9 00:25:50.020 IO depths : 1=0.5%, 2=1.2%, 4=6.6%, 8=78.1%, 16=13.6%, 32=0.0%, >=64=0.0% 00:25:50.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.020 complete : 0=0.0%, 4=89.2%, 8=6.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.020 issued rwts: total=2649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.020 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:50.020 00:25:50.020 Run status 
group 0 (all jobs): 00:25:50.020 READ: bw=23.0MiB/s (24.1MB/s), 868KiB/s-1125KiB/s (889kB/s-1152kB/s), io=231MiB (242MB), run=10001-10044msec 00:25:50.020 02:27:47 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:50.020 02:27:47 -- target/dif.sh@43 -- # local sub 00:25:50.020 02:27:47 -- target/dif.sh@45 -- # for sub in "$@" 00:25:50.020 02:27:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:50.020 02:27:47 -- target/dif.sh@36 -- # local sub_id=0 00:25:50.020 02:27:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@45 -- # for sub in "$@" 00:25:50.020 02:27:47 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:50.020 02:27:47 -- target/dif.sh@36 -- # local sub_id=1 00:25:50.020 02:27:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@45 -- # for sub in "$@" 00:25:50.020 02:27:47 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:50.020 02:27:47 -- target/dif.sh@36 -- # local sub_id=2 00:25:50.020 02:27:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:50.020 02:27:47 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:50.020 02:27:47 -- target/dif.sh@115 -- # numjobs=2 00:25:50.020 02:27:47 -- target/dif.sh@115 -- # iodepth=8 00:25:50.020 02:27:47 -- target/dif.sh@115 -- # runtime=5 00:25:50.020 02:27:47 -- target/dif.sh@115 -- # files=1 00:25:50.020 02:27:47 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:50.020 02:27:47 -- target/dif.sh@28 -- # local sub 00:25:50.020 02:27:47 -- target/dif.sh@30 -- # for sub in "$@" 00:25:50.020 02:27:47 -- target/dif.sh@31 -- # create_subsystem 0 00:25:50.020 02:27:47 -- target/dif.sh@18 -- # local sub_id=0 00:25:50.020 02:27:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 bdev_null0 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 [2024-07-15 02:27:47.892303] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@30 -- # for sub in "$@" 00:25:50.020 02:27:47 -- target/dif.sh@31 -- # create_subsystem 1 00:25:50.020 02:27:47 -- target/dif.sh@18 -- # local sub_id=1 00:25:50.020 02:27:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 bdev_null1 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.020 02:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.020 02:27:47 -- common/autotest_common.sh@10 -- # set +x 00:25:50.020 02:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.020 02:27:47 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:50.020 02:27:47 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:50.020 02:27:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:50.020 02:27:47 -- nvmf/common.sh@520 -- # config=() 00:25:50.020 02:27:47 -- nvmf/common.sh@520 -- # local subsystem config 00:25:50.020 02:27:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.020 02:27:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.020 { 00:25:50.020 "params": { 00:25:50.020 "name": "Nvme$subsystem", 00:25:50.020 "trtype": "$TEST_TRANSPORT", 00:25:50.020 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.020 "adrfam": "ipv4", 00:25:50.020 "trsvcid": "$NVMF_PORT", 00:25:50.020 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.020 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.021 "hdgst": ${hdgst:-false}, 00:25:50.021 "ddgst": ${ddgst:-false} 00:25:50.021 }, 00:25:50.021 "method": "bdev_nvme_attach_controller" 00:25:50.021 } 00:25:50.021 EOF 00:25:50.021 )") 00:25:50.021 02:27:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:50.021 02:27:47 -- target/dif.sh@82 -- # gen_fio_conf 00:25:50.021 02:27:47 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:50.021 02:27:47 -- target/dif.sh@54 -- # local file 00:25:50.021 02:27:47 -- target/dif.sh@56 -- # cat 00:25:50.021 02:27:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:50.021 02:27:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:50.021 02:27:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:50.021 02:27:47 -- nvmf/common.sh@542 -- # cat 00:25:50.021 02:27:47 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:50.021 02:27:47 -- common/autotest_common.sh@1320 -- # shift 00:25:50.021 02:27:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:50.021 02:27:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:50.021 02:27:47 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:50.021 02:27:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:50.021 02:27:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:50.021 02:27:47 -- target/dif.sh@72 -- # (( file <= files )) 00:25:50.021 02:27:47 -- target/dif.sh@73 -- # cat 00:25:50.021 02:27:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:50.021 02:27:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:50.021 { 00:25:50.021 "params": { 00:25:50.021 "name": "Nvme$subsystem", 00:25:50.021 "trtype": "$TEST_TRANSPORT", 00:25:50.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:50.021 "adrfam": "ipv4", 00:25:50.021 "trsvcid": "$NVMF_PORT", 00:25:50.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:50.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:50.021 "hdgst": ${hdgst:-false}, 00:25:50.021 "ddgst": ${ddgst:-false} 00:25:50.021 }, 00:25:50.021 "method": "bdev_nvme_attach_controller" 00:25:50.021 } 00:25:50.021 EOF 00:25:50.021 )") 00:25:50.021 02:27:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:50.021 02:27:47 -- nvmf/common.sh@542 -- # cat 00:25:50.021 02:27:47 -- target/dif.sh@72 -- # (( file++ )) 00:25:50.021 02:27:47 -- target/dif.sh@72 -- # (( file <= files )) 00:25:50.021 02:27:47 -- nvmf/common.sh@544 -- # jq . 
00:25:50.021 02:27:47 -- nvmf/common.sh@545 -- # IFS=, 00:25:50.021 02:27:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:50.021 "params": { 00:25:50.021 "name": "Nvme0", 00:25:50.021 "trtype": "tcp", 00:25:50.021 "traddr": "10.0.0.2", 00:25:50.021 "adrfam": "ipv4", 00:25:50.021 "trsvcid": "4420", 00:25:50.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:50.021 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:50.021 "hdgst": false, 00:25:50.021 "ddgst": false 00:25:50.021 }, 00:25:50.021 "method": "bdev_nvme_attach_controller" 00:25:50.021 },{ 00:25:50.021 "params": { 00:25:50.021 "name": "Nvme1", 00:25:50.021 "trtype": "tcp", 00:25:50.021 "traddr": "10.0.0.2", 00:25:50.021 "adrfam": "ipv4", 00:25:50.021 "trsvcid": "4420", 00:25:50.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:50.021 "hdgst": false, 00:25:50.021 "ddgst": false 00:25:50.021 }, 00:25:50.021 "method": "bdev_nvme_attach_controller" 00:25:50.021 }' 00:25:50.021 02:27:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:50.021 02:27:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:50.021 02:27:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:50.021 02:27:47 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:50.021 02:27:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:50.021 02:27:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:50.021 02:27:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:50.021 02:27:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:50.021 02:27:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:50.021 02:27:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:50.021 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:50.021 ... 00:25:50.021 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:50.021 ... 00:25:50.021 fio-3.35 00:25:50.021 Starting 4 threads 00:25:50.021 [2024-07-15 02:27:48.633233] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
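The job descriptions just printed (rw=randread, bs=(R) 8192B / (W) 16.0KiB / (T) 128KiB, ioengine=spdk_bdev, iodepth=8, 4 threads over two files) follow from the parameters set at dif.sh@115: NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1. Written as a plain fio job file instead of the generated /dev/fd/61, the run would look roughly like this (a sketch only: the exact options gen_fio_conf emits may differ, and the Nvme0n1/Nvme1n1 names assume SPDK's usual controller-plus-namespace bdev naming):

[global]
ioengine=spdk_bdev
thread=1
direct=1
time_based=1
runtime=5
rw=randread
; the three comma-separated sizes apply to read, write and trim respectively
bs=8k,16k,128k
iodepth=8
numjobs=2

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1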
00:25:50.021 [2024-07-15 02:27:48.633319] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:54.275 00:25:54.275 filename0: (groupid=0, jobs=1): err= 0: pid=101696: Mon Jul 15 02:27:53 2024 00:25:54.275 read: IOPS=2085, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5001msec) 00:25:54.275 slat (usec): min=5, max=106, avg=19.97, stdev=12.47 00:25:54.275 clat (usec): min=937, max=9515, avg=3732.09, stdev=494.77 00:25:54.275 lat (usec): min=944, max=9522, avg=3752.06, stdev=495.21 00:25:54.275 clat percentiles (usec): 00:25:54.275 | 1.00th=[ 3097], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3425], 00:25:54.275 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3720], 00:25:54.275 | 70.00th=[ 3818], 80.00th=[ 3949], 90.00th=[ 4113], 95.00th=[ 4490], 00:25:54.275 | 99.00th=[ 5473], 99.50th=[ 5800], 99.90th=[ 7963], 99.95th=[ 8979], 00:25:54.275 | 99.99th=[ 9110] 00:25:54.275 bw ( KiB/s): min=15360, max=18048, per=25.04%, avg=16721.78, stdev=955.57, samples=9 00:25:54.275 iops : min= 1920, max= 2256, avg=2090.22, stdev=119.45, samples=9 00:25:54.275 lat (usec) : 1000=0.08% 00:25:54.275 lat (msec) : 2=0.14%, 4=83.17%, 10=16.61% 00:25:54.275 cpu : usr=95.44%, sys=3.26%, ctx=6, majf=0, minf=0 00:25:54.275 IO depths : 1=10.2%, 2=22.7%, 4=52.2%, 8=14.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.275 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.275 issued rwts: total=10430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.275 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:54.275 filename0: (groupid=0, jobs=1): err= 0: pid=101697: Mon Jul 15 02:27:53 2024 00:25:54.275 read: IOPS=2095, BW=16.4MiB/s (17.2MB/s)(81.9MiB/5004msec) 00:25:54.275 slat (nsec): min=6142, max=83249, avg=11250.16, stdev=7783.93 00:25:54.275 clat (usec): min=1254, max=9784, avg=3761.62, stdev=445.33 00:25:54.275 lat (usec): min=1268, max=9791, avg=3772.87, stdev=445.47 00:25:54.275 clat percentiles (usec): 00:25:54.275 | 1.00th=[ 3195], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3490], 00:25:54.275 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:25:54.275 | 70.00th=[ 3851], 80.00th=[ 4015], 90.00th=[ 4113], 95.00th=[ 4359], 00:25:54.275 | 99.00th=[ 5342], 99.50th=[ 5735], 99.90th=[ 7111], 99.95th=[ 7963], 00:25:54.275 | 99.99th=[ 8029] 00:25:54.275 bw ( KiB/s): min=15872, max=17920, per=25.21%, avg=16835.56, stdev=808.98, samples=9 00:25:54.275 iops : min= 1984, max= 2240, avg=2104.44, stdev=101.12, samples=9 00:25:54.275 lat (msec) : 2=0.27%, 4=78.64%, 10=21.09% 00:25:54.275 cpu : usr=95.54%, sys=3.26%, ctx=6, majf=0, minf=0 00:25:54.275 IO depths : 1=10.5%, 2=24.2%, 4=50.8%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.275 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.275 issued rwts: total=10486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.275 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:54.275 filename1: (groupid=0, jobs=1): err= 0: pid=101698: Mon Jul 15 02:27:53 2024 00:25:54.275 read: IOPS=2079, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5001msec) 00:25:54.275 slat (usec): min=5, max=105, avg=19.71, stdev=12.38 00:25:54.275 clat (usec): min=688, max=10623, avg=3752.21, stdev=482.52 00:25:54.275 lat (usec): min=698, max=10634, avg=3771.92, stdev=482.68 00:25:54.275 clat percentiles (usec): 00:25:54.275 | 
1.00th=[ 3163], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3458], 00:25:54.275 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3720], 00:25:54.275 | 70.00th=[ 3851], 80.00th=[ 3949], 90.00th=[ 4113], 95.00th=[ 4424], 00:25:54.275 | 99.00th=[ 5473], 99.50th=[ 5866], 99.90th=[ 8717], 99.95th=[ 8979], 00:25:54.275 | 99.99th=[10683] 00:25:54.275 bw ( KiB/s): min=15360, max=18032, per=24.98%, avg=16682.67, stdev=938.73, samples=9 00:25:54.275 iops : min= 1920, max= 2254, avg=2085.33, stdev=117.34, samples=9 00:25:54.275 lat (usec) : 750=0.01% 00:25:54.275 lat (msec) : 4=82.92%, 10=17.04%, 20=0.03% 00:25:54.275 cpu : usr=95.36%, sys=3.42%, ctx=5, majf=0, minf=9 00:25:54.275 IO depths : 1=7.9%, 2=19.1%, 4=55.8%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.275 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.275 issued rwts: total=10400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.275 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:54.275 filename1: (groupid=0, jobs=1): err= 0: pid=101699: Mon Jul 15 02:27:53 2024 00:25:54.275 read: IOPS=2089, BW=16.3MiB/s (17.1MB/s)(81.6MiB/5002msec) 00:25:54.275 slat (usec): min=6, max=848, avg=14.13, stdev=12.21 00:25:54.275 clat (usec): min=1512, max=7899, avg=3767.72, stdev=437.25 00:25:54.275 lat (usec): min=1519, max=7905, avg=3781.85, stdev=437.04 00:25:54.275 clat percentiles (usec): 00:25:54.275 | 1.00th=[ 3163], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3490], 00:25:54.275 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3687], 60.00th=[ 3752], 00:25:54.275 | 70.00th=[ 3851], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4424], 00:25:54.275 | 99.00th=[ 5473], 99.50th=[ 5866], 99.90th=[ 6980], 99.95th=[ 7046], 00:25:54.275 | 99.99th=[ 7111] 00:25:54.275 bw ( KiB/s): min=15616, max=18048, per=25.12%, avg=16773.33, stdev=906.40, samples=9 00:25:54.275 iops : min= 1952, max= 2256, avg=2096.67, stdev=113.30, samples=9 00:25:54.275 lat (msec) : 2=0.03%, 4=79.83%, 10=20.14% 00:25:54.275 cpu : usr=95.24%, sys=3.54%, ctx=14, majf=0, minf=0 00:25:54.275 IO depths : 1=10.0%, 2=22.0%, 4=53.0%, 8=15.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:54.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.275 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.275 issued rwts: total=10451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.275 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:54.275 00:25:54.275 Run status group 0 (all jobs): 00:25:54.275 READ: bw=65.2MiB/s (68.4MB/s), 16.2MiB/s-16.4MiB/s (17.0MB/s-17.2MB/s), io=326MiB (342MB), run=5001-5004msec 00:25:54.533 02:27:53 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:54.533 02:27:53 -- target/dif.sh@43 -- # local sub 00:25:54.533 02:27:53 -- target/dif.sh@45 -- # for sub in "$@" 00:25:54.533 02:27:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:54.533 02:27:53 -- target/dif.sh@36 -- # local sub_id=0 00:25:54.533 02:27:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:54.533 02:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.533 02:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 02:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.533 02:27:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:54.533 02:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.533 02:27:53 
-- common/autotest_common.sh@10 -- # set +x 00:25:54.533 02:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.533 02:27:53 -- target/dif.sh@45 -- # for sub in "$@" 00:25:54.533 02:27:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:54.533 02:27:53 -- target/dif.sh@36 -- # local sub_id=1 00:25:54.533 02:27:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.533 02:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.533 02:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 02:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.533 02:27:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:54.533 02:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.533 02:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 02:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.533 00:25:54.533 real 0m23.636s 00:25:54.533 user 2m6.885s 00:25:54.533 sys 0m4.266s 00:25:54.533 ************************************ 00:25:54.533 END TEST fio_dif_rand_params 00:25:54.533 ************************************ 00:25:54.533 02:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.533 02:27:54 -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 02:27:54 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:54.533 02:27:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:54.533 02:27:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:54.533 02:27:54 -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 ************************************ 00:25:54.533 START TEST fio_dif_digest 00:25:54.533 ************************************ 00:25:54.533 02:27:54 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:25:54.533 02:27:54 -- target/dif.sh@123 -- # local NULL_DIF 00:25:54.533 02:27:54 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:54.533 02:27:54 -- target/dif.sh@125 -- # local hdgst ddgst 00:25:54.533 02:27:54 -- target/dif.sh@127 -- # NULL_DIF=3 00:25:54.533 02:27:54 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:54.533 02:27:54 -- target/dif.sh@127 -- # numjobs=3 00:25:54.533 02:27:54 -- target/dif.sh@127 -- # iodepth=3 00:25:54.533 02:27:54 -- target/dif.sh@127 -- # runtime=10 00:25:54.533 02:27:54 -- target/dif.sh@128 -- # hdgst=true 00:25:54.533 02:27:54 -- target/dif.sh@128 -- # ddgst=true 00:25:54.533 02:27:54 -- target/dif.sh@130 -- # create_subsystems 0 00:25:54.533 02:27:54 -- target/dif.sh@28 -- # local sub 00:25:54.533 02:27:54 -- target/dif.sh@30 -- # for sub in "$@" 00:25:54.533 02:27:54 -- target/dif.sh@31 -- # create_subsystem 0 00:25:54.533 02:27:54 -- target/dif.sh@18 -- # local sub_id=0 00:25:54.533 02:27:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:54.533 02:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.533 02:27:54 -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 bdev_null0 00:25:54.533 02:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.533 02:27:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:54.533 02:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.533 02:27:54 -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 02:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.533 02:27:54 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:54.533 02:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.533 02:27:54 -- common/autotest_common.sh@10 -- # set +x 00:25:54.533 02:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.533 02:27:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:54.792 02:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.792 02:27:54 -- common/autotest_common.sh@10 -- # set +x 00:25:54.792 [2024-07-15 02:27:54.093364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.792 02:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.792 02:27:54 -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:54.792 02:27:54 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:54.792 02:27:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:54.792 02:27:54 -- nvmf/common.sh@520 -- # config=() 00:25:54.792 02:27:54 -- nvmf/common.sh@520 -- # local subsystem config 00:25:54.792 02:27:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:54.792 02:27:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:54.792 { 00:25:54.792 "params": { 00:25:54.792 "name": "Nvme$subsystem", 00:25:54.792 "trtype": "$TEST_TRANSPORT", 00:25:54.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:54.792 "adrfam": "ipv4", 00:25:54.792 "trsvcid": "$NVMF_PORT", 00:25:54.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:54.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:54.792 "hdgst": ${hdgst:-false}, 00:25:54.792 "ddgst": ${ddgst:-false} 00:25:54.792 }, 00:25:54.792 "method": "bdev_nvme_attach_controller" 00:25:54.792 } 00:25:54.792 EOF 00:25:54.792 )") 00:25:54.792 02:27:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.792 02:27:54 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.792 02:27:54 -- target/dif.sh@82 -- # gen_fio_conf 00:25:54.792 02:27:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:54.792 02:27:54 -- target/dif.sh@54 -- # local file 00:25:54.792 02:27:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:54.792 02:27:54 -- target/dif.sh@56 -- # cat 00:25:54.792 02:27:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:54.792 02:27:54 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:54.792 02:27:54 -- common/autotest_common.sh@1320 -- # shift 00:25:54.792 02:27:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:54.792 02:27:54 -- nvmf/common.sh@542 -- # cat 00:25:54.792 02:27:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.792 02:27:54 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:54.792 02:27:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:54.792 02:27:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:54.792 02:27:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:54.792 02:27:54 -- target/dif.sh@72 -- # (( file <= files )) 00:25:54.792 02:27:54 -- nvmf/common.sh@544 -- # jq . 
00:25:54.792 02:27:54 -- nvmf/common.sh@545 -- # IFS=, 00:25:54.792 02:27:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:54.792 "params": { 00:25:54.792 "name": "Nvme0", 00:25:54.792 "trtype": "tcp", 00:25:54.792 "traddr": "10.0.0.2", 00:25:54.792 "adrfam": "ipv4", 00:25:54.792 "trsvcid": "4420", 00:25:54.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:54.792 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:54.792 "hdgst": true, 00:25:54.792 "ddgst": true 00:25:54.792 }, 00:25:54.792 "method": "bdev_nvme_attach_controller" 00:25:54.792 }' 00:25:54.792 02:27:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:54.792 02:27:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:54.792 02:27:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:54.792 02:27:54 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:54.792 02:27:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:54.792 02:27:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:54.792 02:27:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:54.792 02:27:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:54.792 02:27:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:54.792 02:27:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.792 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:54.792 ... 00:25:54.792 fio-3.35 00:25:54.792 Starting 3 threads 00:25:55.358 [2024-07-15 02:27:54.682300] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
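The rpc_cmd traces above reduce to four RPC calls that stand up the digest-enabled target. As a sketch, the equivalent standalone rpc.py invocations would be (arguments copied from the trace; scripts/rpc.py talking to the same /var/tmp/spdk.sock is an assumption):

    # Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3, per the traced bdev_null_create
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # TCP subsystem, namespace, and listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then attaches through the spdk_bdev engine with "hdgst": true and "ddgst": true in the printed JSON, which is what makes this run exercise the NVMe/TCP header and data digests (CRC32C) end to end.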
00:25:55.358 [2024-07-15 02:27:54.682384] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:05.331 00:26:05.331 filename0: (groupid=0, jobs=1): err= 0: pid=101804: Mon Jul 15 02:28:04 2024 00:26:05.331 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10044msec) 00:26:05.331 slat (nsec): min=6514, max=70125, avg=15790.96, stdev=6852.98 00:26:05.331 clat (usec): min=4642, max=55667, avg=14275.55, stdev=2609.75 00:26:05.331 lat (usec): min=4666, max=55686, avg=14291.34, stdev=2611.17 00:26:05.331 clat percentiles (usec): 00:26:05.331 | 1.00th=[ 7373], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[13173], 00:26:05.331 | 30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:26:05.331 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:26:05.331 | 99.00th=[18220], 99.50th=[19268], 99.90th=[21103], 99.95th=[44827], 00:26:05.331 | 99.99th=[55837] 00:26:05.331 bw ( KiB/s): min=24832, max=30087, per=30.67%, avg=26905.10, stdev=1735.77, samples=20 00:26:05.331 iops : min= 194, max= 235, avg=210.15, stdev=13.56, samples=20 00:26:05.331 lat (msec) : 10=9.45%, 20=90.21%, 50=0.29%, 100=0.05% 00:26:05.331 cpu : usr=94.12%, sys=4.35%, ctx=15, majf=0, minf=9 00:26:05.331 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.331 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.331 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.331 filename0: (groupid=0, jobs=1): err= 0: pid=101805: Mon Jul 15 02:28:04 2024 00:26:05.331 read: IOPS=237, BW=29.6MiB/s (31.1MB/s)(297MiB/10005msec) 00:26:05.331 slat (nsec): min=6869, max=85019, avg=19582.22, stdev=8438.59 00:26:05.331 clat (usec): min=5727, max=54082, avg=12628.57, stdev=7831.02 00:26:05.331 lat (usec): min=5749, max=54103, avg=12648.15, stdev=7831.35 00:26:05.331 clat percentiles (usec): 00:26:05.331 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:26:05.331 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:26:05.331 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12518], 95.00th=[13960], 00:26:05.331 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:26:05.331 | 99.99th=[54264] 00:26:05.331 bw ( KiB/s): min=24223, max=35697, per=34.55%, avg=30304.63, stdev=3471.84, samples=19 00:26:05.331 iops : min= 189, max= 278, avg=236.63, stdev=27.04, samples=19 00:26:05.331 lat (msec) : 10=12.02%, 20=84.19%, 50=0.13%, 100=3.67% 00:26:05.331 cpu : usr=94.81%, sys=3.80%, ctx=10, majf=0, minf=9 00:26:05.331 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.331 issued rwts: total=2372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.331 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.331 filename0: (groupid=0, jobs=1): err= 0: pid=101806: Mon Jul 15 02:28:04 2024 00:26:05.331 read: IOPS=240, BW=30.1MiB/s (31.5MB/s)(301MiB/10004msec) 00:26:05.331 slat (usec): min=6, max=112, avg=14.71, stdev= 8.26 00:26:05.331 clat (usec): min=6477, max=53528, avg=12453.37, stdev=2935.18 00:26:05.331 lat (usec): min=6490, max=53540, avg=12468.09, stdev=2936.15 00:26:05.331 clat percentiles (usec): 
00:26:05.331 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[11207], 00:26:05.332 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:26:05.332 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[15139], 00:26:05.332 | 99.00th=[16057], 99.50th=[16909], 99.90th=[52691], 99.95th=[53216], 00:26:05.332 | 99.99th=[53740] 00:26:05.332 bw ( KiB/s): min=26368, max=35442, per=35.13%, avg=30811.53, stdev=2564.09, samples=19 00:26:05.332 iops : min= 206, max= 276, avg=240.63, stdev=19.93, samples=19 00:26:05.332 lat (msec) : 10=17.04%, 20=82.71%, 50=0.04%, 100=0.21% 00:26:05.332 cpu : usr=94.17%, sys=4.18%, ctx=8, majf=0, minf=9 00:26:05.332 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.332 issued rwts: total=2406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.332 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:05.332 00:26:05.332 Run status group 0 (all jobs): 00:26:05.332 READ: bw=85.7MiB/s (89.8MB/s), 26.2MiB/s-30.1MiB/s (27.5MB/s-31.5MB/s), io=860MiB (902MB), run=10004-10044msec 00:26:05.589 02:28:05 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:05.589 02:28:05 -- target/dif.sh@43 -- # local sub 00:26:05.589 02:28:05 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.589 02:28:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:05.589 02:28:05 -- target/dif.sh@36 -- # local sub_id=0 00:26:05.589 02:28:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:05.589 02:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.589 02:28:05 -- common/autotest_common.sh@10 -- # set +x 00:26:05.589 02:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.589 02:28:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:05.589 02:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.589 02:28:05 -- common/autotest_common.sh@10 -- # set +x 00:26:05.589 ************************************ 00:26:05.589 END TEST fio_dif_digest 00:26:05.589 ************************************ 00:26:05.589 02:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.589 00:26:05.589 real 0m11.007s 00:26:05.589 user 0m29.026s 00:26:05.589 sys 0m1.509s 00:26:05.589 02:28:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.589 02:28:05 -- common/autotest_common.sh@10 -- # set +x 00:26:05.589 02:28:05 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:05.589 02:28:05 -- target/dif.sh@147 -- # nvmftestfini 00:26:05.589 02:28:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:05.589 02:28:05 -- nvmf/common.sh@116 -- # sync 00:26:05.847 02:28:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:05.847 02:28:05 -- nvmf/common.sh@119 -- # set +e 00:26:05.847 02:28:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:05.847 02:28:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:05.847 rmmod nvme_tcp 00:26:05.847 rmmod nvme_fabrics 00:26:05.847 rmmod nvme_keyring 00:26:05.847 02:28:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:05.847 02:28:05 -- nvmf/common.sh@123 -- # set -e 00:26:05.847 02:28:05 -- nvmf/common.sh@124 -- # return 0 00:26:05.847 02:28:05 -- nvmf/common.sh@477 -- # '[' -n 101041 ']' 00:26:05.847 02:28:05 -- nvmf/common.sh@478 -- # killprocess 101041 00:26:05.847 02:28:05 -- common/autotest_common.sh@926 -- # '[' 
-z 101041 ']' 00:26:05.847 02:28:05 -- common/autotest_common.sh@930 -- # kill -0 101041 00:26:05.847 02:28:05 -- common/autotest_common.sh@931 -- # uname 00:26:05.847 02:28:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:05.847 02:28:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101041 00:26:05.847 killing process with pid 101041 00:26:05.847 02:28:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:05.847 02:28:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:05.847 02:28:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101041' 00:26:05.847 02:28:05 -- common/autotest_common.sh@945 -- # kill 101041 00:26:05.847 02:28:05 -- common/autotest_common.sh@950 -- # wait 101041 00:26:06.105 02:28:05 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:06.105 02:28:05 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:06.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:06.362 Waiting for block devices as requested 00:26:06.362 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:06.362 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:06.620 02:28:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:06.620 02:28:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:06.620 02:28:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.620 02:28:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:06.620 02:28:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.620 02:28:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:06.620 02:28:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.620 02:28:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:06.620 00:26:06.620 real 0m59.901s 00:26:06.620 user 3m52.778s 00:26:06.620 sys 0m13.659s 00:26:06.620 02:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.620 ************************************ 00:26:06.620 END TEST nvmf_dif 00:26:06.620 ************************************ 00:26:06.620 02:28:06 -- common/autotest_common.sh@10 -- # set +x 00:26:06.620 02:28:06 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:06.620 02:28:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:06.620 02:28:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.620 02:28:06 -- common/autotest_common.sh@10 -- # set +x 00:26:06.620 ************************************ 00:26:06.620 START TEST nvmf_abort_qd_sizes 00:26:06.620 ************************************ 00:26:06.620 02:28:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:06.620 * Looking for test storage... 
00:26:06.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:06.620 02:28:06 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:06.620 02:28:06 -- nvmf/common.sh@7 -- # uname -s 00:26:06.620 02:28:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.620 02:28:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.620 02:28:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.620 02:28:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.620 02:28:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.620 02:28:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.620 02:28:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.620 02:28:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.620 02:28:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.620 02:28:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.620 02:28:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 00:26:06.620 02:28:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=97a9fd12-e411-46d9-8a8a-09652cab25c1 00:26:06.620 02:28:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.621 02:28:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.621 02:28:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:06.621 02:28:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:06.621 02:28:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.621 02:28:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.621 02:28:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.621 02:28:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.621 02:28:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.621 02:28:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.621 02:28:06 -- paths/export.sh@5 -- # export PATH 00:26:06.621 02:28:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.621 02:28:06 -- nvmf/common.sh@46 -- # : 0 00:26:06.621 02:28:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:06.621 02:28:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:06.621 02:28:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:06.621 02:28:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.621 02:28:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.621 02:28:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:06.621 02:28:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:06.621 02:28:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:06.621 02:28:06 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:06.621 02:28:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:06.621 02:28:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.621 02:28:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:06.621 02:28:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:06.621 02:28:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:06.621 02:28:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.621 02:28:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:06.621 02:28:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.621 02:28:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:06.621 02:28:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:06.621 02:28:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:06.621 02:28:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:06.621 02:28:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:06.621 02:28:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:06.621 02:28:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.621 02:28:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.621 02:28:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:06.621 02:28:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:06.621 02:28:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:06.621 02:28:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:06.621 02:28:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:06.621 02:28:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.621 02:28:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:06.621 02:28:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:06.621 02:28:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:06.621 02:28:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:06.621 02:28:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:06.621 02:28:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:06.880 Cannot find device "nvmf_tgt_br" 00:26:06.880 02:28:06 -- nvmf/common.sh@154 -- # true 00:26:06.880 02:28:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:06.880 Cannot find device "nvmf_tgt_br2" 00:26:06.880 02:28:06 -- nvmf/common.sh@155 -- # true 
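The nvmf_veth_init sequence traced below builds the test topology from two veth pairs, one bridge, and one network namespace. Condensed into a sketch (interface names and addresses as in the trace; the second target interface, link-up steps, and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge joins the two legs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Bringing every link up and the three ping checks (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) follow in the trace.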
00:26:06.880 02:28:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:06.880 02:28:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:06.880 Cannot find device "nvmf_tgt_br" 00:26:06.880 02:28:06 -- nvmf/common.sh@157 -- # true 00:26:06.880 02:28:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:06.880 Cannot find device "nvmf_tgt_br2" 00:26:06.880 02:28:06 -- nvmf/common.sh@158 -- # true 00:26:06.880 02:28:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:06.880 02:28:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:06.880 02:28:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:06.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:06.880 02:28:06 -- nvmf/common.sh@161 -- # true 00:26:06.880 02:28:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:06.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:06.880 02:28:06 -- nvmf/common.sh@162 -- # true 00:26:06.880 02:28:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:06.880 02:28:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:06.880 02:28:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:06.880 02:28:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:06.880 02:28:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:06.880 02:28:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:06.880 02:28:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:06.880 02:28:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:06.880 02:28:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:06.880 02:28:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:06.880 02:28:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:06.880 02:28:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:06.880 02:28:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:06.880 02:28:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:06.880 02:28:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:06.880 02:28:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:06.880 02:28:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:06.880 02:28:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:06.880 02:28:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:07.139 02:28:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:07.139 02:28:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:07.139 02:28:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:07.139 02:28:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:07.139 02:28:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:07.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:07.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:26:07.139 00:26:07.139 --- 10.0.0.2 ping statistics --- 00:26:07.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.139 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:26:07.139 02:28:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:07.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:07.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:26:07.139 00:26:07.139 --- 10.0.0.3 ping statistics --- 00:26:07.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.139 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:07.139 02:28:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:07.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:07.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:07.139 00:26:07.139 --- 10.0.0.1 ping statistics --- 00:26:07.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.139 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:07.139 02:28:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.139 02:28:06 -- nvmf/common.sh@421 -- # return 0 00:26:07.139 02:28:06 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:07.139 02:28:06 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:07.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:07.965 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:07.965 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:07.965 02:28:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.965 02:28:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:07.965 02:28:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:07.966 02:28:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.966 02:28:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:07.966 02:28:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:07.966 02:28:07 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:07.966 02:28:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:07.966 02:28:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:07.966 02:28:07 -- common/autotest_common.sh@10 -- # set +x 00:26:07.966 02:28:07 -- nvmf/common.sh@469 -- # nvmfpid=102400 00:26:07.966 02:28:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:07.966 02:28:07 -- nvmf/common.sh@470 -- # waitforlisten 102400 00:26:07.966 02:28:07 -- common/autotest_common.sh@819 -- # '[' -z 102400 ']' 00:26:07.966 02:28:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.966 02:28:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:07.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.966 02:28:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.966 02:28:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:07.966 02:28:07 -- common/autotest_common.sh@10 -- # set +x 00:26:07.966 [2024-07-15 02:28:07.495237] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:26:07.966 [2024-07-15 02:28:07.495329] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.225 [2024-07-15 02:28:07.635482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.225 [2024-07-15 02:28:07.738803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:08.225 [2024-07-15 02:28:07.739310] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.225 [2024-07-15 02:28:07.739494] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.225 [2024-07-15 02:28:07.739703] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.225 [2024-07-15 02:28:07.739923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.225 [2024-07-15 02:28:07.740059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.225 [2024-07-15 02:28:07.740158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.225 [2024-07-15 02:28:07.740159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.161 02:28:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:09.161 02:28:08 -- common/autotest_common.sh@852 -- # return 0 00:26:09.161 02:28:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:09.161 02:28:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:09.161 02:28:08 -- common/autotest_common.sh@10 -- # set +x 00:26:09.161 02:28:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.161 02:28:08 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:09.161 02:28:08 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:09.161 02:28:08 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:09.161 02:28:08 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:09.161 02:28:08 -- scripts/common.sh@312 -- # local nvmes 00:26:09.162 02:28:08 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:09.162 02:28:08 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:09.162 02:28:08 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:09.162 02:28:08 -- scripts/common.sh@297 -- # local bdf= 00:26:09.162 02:28:08 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:09.162 02:28:08 -- scripts/common.sh@232 -- # local class 00:26:09.162 02:28:08 -- scripts/common.sh@233 -- # local subclass 00:26:09.162 02:28:08 -- scripts/common.sh@234 -- # local progif 00:26:09.162 02:28:08 -- scripts/common.sh@235 -- # printf %02x 1 00:26:09.162 02:28:08 -- scripts/common.sh@235 -- # class=01 00:26:09.162 02:28:08 -- scripts/common.sh@236 -- # printf %02x 8 00:26:09.162 02:28:08 -- scripts/common.sh@236 -- # subclass=08 00:26:09.162 02:28:08 -- scripts/common.sh@237 -- # printf %02x 2 00:26:09.162 02:28:08 -- scripts/common.sh@237 -- # progif=02 00:26:09.162 02:28:08 -- scripts/common.sh@239 -- # hash lspci 00:26:09.162 02:28:08 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:09.162 02:28:08 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:09.162 02:28:08 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 
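nvme_in_userspace walks PCI devices by class code: class 01 (mass storage), subclass 08 (non-volatile memory controller), prog-if 02 (NVM Express) — hence the printf %02x calls above producing "01", "08", and "02". The lspci pipeline traced here and just below amounts to roughly:

    # Emit '<bdf> "<class>" ...' records, keep prog-if 02 entries, match class 0108, print the BDF
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Each BDF it prints is then checked against /sys/bus/pci/drivers/nvme/<bdf>, which is how the run below settles on 0000:00:06.0 and 0000:00:07.0 as its two NVMe controllers.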
00:26:09.162 02:28:08 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:09.162 02:28:08 -- scripts/common.sh@244 -- # tr -d '"' 00:26:09.162 02:28:08 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:09.162 02:28:08 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:09.162 02:28:08 -- scripts/common.sh@15 -- # local i 00:26:09.162 02:28:08 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:09.162 02:28:08 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:09.162 02:28:08 -- scripts/common.sh@24 -- # return 0 00:26:09.162 02:28:08 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:09.162 02:28:08 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:09.162 02:28:08 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:09.162 02:28:08 -- scripts/common.sh@15 -- # local i 00:26:09.162 02:28:08 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:09.162 02:28:08 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:09.162 02:28:08 -- scripts/common.sh@24 -- # return 0 00:26:09.162 02:28:08 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:09.162 02:28:08 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:09.162 02:28:08 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:09.162 02:28:08 -- scripts/common.sh@322 -- # uname -s 00:26:09.162 02:28:08 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:09.162 02:28:08 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:09.162 02:28:08 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:09.162 02:28:08 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:09.162 02:28:08 -- scripts/common.sh@322 -- # uname -s 00:26:09.162 02:28:08 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:09.162 02:28:08 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:09.162 02:28:08 -- scripts/common.sh@327 -- # (( 2 )) 00:26:09.162 02:28:08 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:09.162 02:28:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:09.162 02:28:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:09.162 02:28:08 -- common/autotest_common.sh@10 -- # set +x 00:26:09.162 ************************************ 00:26:09.162 START TEST spdk_target_abort 00:26:09.162 ************************************ 00:26:09.162 02:28:08 -- common/autotest_common.sh@1104 -- # spdk_target 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:09.162 02:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.162 02:28:08 -- common/autotest_common.sh@10 -- # set +x 00:26:09.162 spdk_targetn1 00:26:09.162 02:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.162 02:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.162 02:28:08 -- common/autotest_common.sh@10 -- # set +x 00:26:09.162 [2024-07-15 
02:28:08.624864] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.162 02:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:09.162 02:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.162 02:28:08 -- common/autotest_common.sh@10 -- # set +x 00:26:09.162 02:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:09.162 02:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.162 02:28:08 -- common/autotest_common.sh@10 -- # set +x 00:26:09.162 02:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:09.162 02:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:09.162 02:28:08 -- common/autotest_common.sh@10 -- # set +x 00:26:09.162 [2024-07-15 02:28:08.653061] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.162 02:28:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:09.162 02:28:08 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:12.444 Initializing NVMe Controllers 00:26:12.444 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:12.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:12.444 Initialization complete. Launching workers. 00:26:12.444 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10150, failed: 0 00:26:12.444 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1105, failed to submit 9045 00:26:12.444 success 820, unsuccess 285, failed 0 00:26:12.444 02:28:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:12.444 02:28:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:15.730 Initializing NVMe Controllers 00:26:15.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:15.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:15.730 Initialization complete. Launching workers. 00:26:15.730 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5943, failed: 0 00:26:15.730 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1277, failed to submit 4666 00:26:15.730 success 249, unsuccess 1028, failed 0 00:26:15.730 02:28:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:15.730 02:28:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:19.018 Initializing NVMe Controllers 00:26:19.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:19.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:19.018 Initialization complete. Launching workers. 
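The back-to-back runs here come from the qds=(4 24 64) loop set up earlier: the same abort example is replayed against the target at increasing queue depth. In sketch form (flags copied from the trace; -M 50 presumably sets a 50/50 read/write mix):

    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    done

The accounting after each run is the point of the test: every abort comes back as success, unsuccess, or failed, and the submitted plus failed-to-submit totals match the completed I/O count (e.g. 1105 + 9045 = 10150 in the first run above).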
00:26:19.018 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30278, failed: 0 00:26:19.018 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2588, failed to submit 27690 00:26:19.018 success 459, unsuccess 2129, failed 0 00:26:19.018 02:28:18 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:19.018 02:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.018 02:28:18 -- common/autotest_common.sh@10 -- # set +x 00:26:19.018 02:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.019 02:28:18 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:19.019 02:28:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.019 02:28:18 -- common/autotest_common.sh@10 -- # set +x 00:26:19.586 02:28:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.586 02:28:18 -- target/abort_qd_sizes.sh@62 -- # killprocess 102400 00:26:19.586 02:28:18 -- common/autotest_common.sh@926 -- # '[' -z 102400 ']' 00:26:19.586 02:28:18 -- common/autotest_common.sh@930 -- # kill -0 102400 00:26:19.586 02:28:18 -- common/autotest_common.sh@931 -- # uname 00:26:19.586 02:28:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:19.586 02:28:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102400 00:26:19.586 killing process with pid 102400 00:26:19.586 02:28:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:19.586 02:28:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:19.586 02:28:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102400' 00:26:19.586 02:28:18 -- common/autotest_common.sh@945 -- # kill 102400 00:26:19.586 02:28:18 -- common/autotest_common.sh@950 -- # wait 102400 00:26:19.845 ************************************ 00:26:19.845 END TEST spdk_target_abort 00:26:19.845 ************************************ 00:26:19.845 00:26:19.845 real 0m10.627s 00:26:19.845 user 0m43.405s 00:26:19.845 sys 0m1.686s 00:26:19.845 02:28:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.845 02:28:19 -- common/autotest_common.sh@10 -- # set +x 00:26:19.845 02:28:19 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:19.845 02:28:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:19.845 02:28:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.845 02:28:19 -- common/autotest_common.sh@10 -- # set +x 00:26:19.845 ************************************ 00:26:19.845 START TEST kernel_target_abort 00:26:19.845 ************************************ 00:26:19.845 02:28:19 -- common/autotest_common.sh@1104 -- # kernel_target 00:26:19.845 02:28:19 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:19.845 02:28:19 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:19.845 02:28:19 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:19.845 02:28:19 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:19.845 02:28:19 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:19.845 02:28:19 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:19.845 02:28:19 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:19.845 02:28:19 -- nvmf/common.sh@627 -- # local block nvme 00:26:19.845 02:28:19 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:19.845 02:28:19 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:19.845 02:28:19 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:19.845 02:28:19 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:20.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:20.104 Waiting for block devices as requested 00:26:20.104 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:20.363 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:20.363 02:28:19 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:20.363 02:28:19 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:20.363 02:28:19 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:20.363 02:28:19 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:20.363 02:28:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:20.363 No valid GPT data, bailing 00:26:20.363 02:28:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:20.363 02:28:19 -- scripts/common.sh@393 -- # pt= 00:26:20.363 02:28:19 -- scripts/common.sh@394 -- # return 1 00:26:20.363 02:28:19 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:20.363 02:28:19 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:20.363 02:28:19 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:20.363 02:28:19 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:20.363 02:28:19 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:20.363 02:28:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:20.622 No valid GPT data, bailing 00:26:20.622 02:28:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:20.622 02:28:19 -- scripts/common.sh@393 -- # pt= 00:26:20.622 02:28:19 -- scripts/common.sh@394 -- # return 1 00:26:20.622 02:28:19 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:20.622 02:28:19 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:20.622 02:28:19 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:20.622 02:28:19 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:20.622 02:28:19 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:20.622 02:28:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:20.622 No valid GPT data, bailing 00:26:20.622 02:28:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:20.622 02:28:20 -- scripts/common.sh@393 -- # pt= 00:26:20.622 02:28:20 -- scripts/common.sh@394 -- # return 1 00:26:20.622 02:28:20 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:20.622 02:28:20 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:20.622 02:28:20 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:20.622 02:28:20 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:20.622 02:28:20 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:20.622 02:28:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:20.622 No valid GPT data, bailing 00:26:20.622 02:28:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:20.622 02:28:20 -- scripts/common.sh@393 -- # pt= 00:26:20.622 02:28:20 -- scripts/common.sh@394 -- # return 1 00:26:20.622 02:28:20 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:20.622 02:28:20 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:20.622 02:28:20 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:20.622 02:28:20 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:20.622 02:28:20 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:20.622 02:28:20 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:20.622 02:28:20 -- nvmf/common.sh@654 -- # echo 1 00:26:20.622 02:28:20 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:20.622 02:28:20 -- nvmf/common.sh@656 -- # echo 1 00:26:20.622 02:28:20 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:20.622 02:28:20 -- nvmf/common.sh@663 -- # echo tcp 00:26:20.622 02:28:20 -- nvmf/common.sh@664 -- # echo 4420 00:26:20.622 02:28:20 -- nvmf/common.sh@665 -- # echo ipv4 00:26:20.622 02:28:20 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:20.622 02:28:20 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:97a9fd12-e411-46d9-8a8a-09652cab25c1 --hostid=97a9fd12-e411-46d9-8a8a-09652cab25c1 -a 10.0.0.1 -t tcp -s 4420 00:26:20.622 00:26:20.622 Discovery Log Number of Records 2, Generation counter 2 00:26:20.622 =====Discovery Log Entry 0====== 00:26:20.622 trtype: tcp 00:26:20.622 adrfam: ipv4 00:26:20.622 subtype: current discovery subsystem 00:26:20.622 treq: not specified, sq flow control disable supported 00:26:20.622 portid: 1 00:26:20.622 trsvcid: 4420 00:26:20.622 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:20.622 traddr: 10.0.0.1 00:26:20.622 eflags: none 00:26:20.622 sectype: none 00:26:20.622 =====Discovery Log Entry 1====== 00:26:20.622 trtype: tcp 00:26:20.622 adrfam: ipv4 00:26:20.622 subtype: nvme subsystem 00:26:20.622 treq: not specified, sq flow control disable supported 00:26:20.622 portid: 1 00:26:20.622 trsvcid: 4420 00:26:20.622 subnqn: kernel_target 00:26:20.622 traddr: 10.0.0.1 00:26:20.622 eflags: none 00:26:20.622 sectype: none 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:20.622 02:28:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.623 02:28:20 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:20.623 02:28:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.623 02:28:20 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
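configure_kernel_target drives the in-kernel nvmet target purely through configfs. The xtrace output above drops shell redirections, so the attribute paths in this sketch are the standard nvmet configfs names and should be read as assumptions; the values are the ones actually traced:

    mkdir /sys/kernel/config/nvmet/subsystems/kernel_target
    mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
    mkdir /sys/kernel/config/nvmet/ports/1
    echo 1            > /sys/kernel/config/nvmet/subsystems/kernel_target/attr_allow_any_host
    echo /dev/nvme1n3 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/device_path
    echo 1            > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable
    echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/

(The traced echo of SPDK-kernel_target presumably lands in a serial or model attribute.) The nvme discover output above confirms the result — one discovery entry plus one nvme-subsystem entry for kernel_target at 10.0.0.1:4420 — and the clean_kernel_target teardown later removes the port symlink and rmdirs these nodes in reverse order.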
00:26:20.623 02:28:20 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:20.623 02:28:20 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:20.623 02:28:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:20.623 02:28:20 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:23.957 Initializing NVMe Controllers 00:26:23.957 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:23.957 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:23.957 Initialization complete. Launching workers. 00:26:23.957 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 29898, failed: 0 00:26:23.957 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29898, failed to submit 0 00:26:23.957 success 0, unsuccess 29898, failed 0 00:26:23.957 02:28:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:23.957 02:28:23 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:27.243 Initializing NVMe Controllers 00:26:27.243 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:27.243 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:27.243 Initialization complete. Launching workers. 00:26:27.243 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64806, failed: 0 00:26:27.243 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26152, failed to submit 38654 00:26:27.243 success 0, unsuccess 26152, failed 0 00:26:27.243 02:28:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:27.243 02:28:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:30.527 Initializing NVMe Controllers 00:26:30.527 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:30.527 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:30.527 Initialization complete. Launching workers. 
00:26:30.527 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 72794, failed: 0 00:26:30.527 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18170, failed to submit 54624 00:26:30.527 success 0, unsuccess 18170, failed 0 00:26:30.527 02:28:29 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:30.527 02:28:29 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:30.527 02:28:29 -- nvmf/common.sh@677 -- # echo 0 00:26:30.527 02:28:29 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:30.527 02:28:29 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:30.527 02:28:29 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:30.527 02:28:29 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:30.527 02:28:29 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:30.527 02:28:29 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:30.527 ************************************ 00:26:30.527 END TEST kernel_target_abort 00:26:30.527 ************************************ 00:26:30.527 00:26:30.527 real 0m10.525s 00:26:30.527 user 0m5.112s 00:26:30.527 sys 0m2.548s 00:26:30.527 02:28:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.527 02:28:29 -- common/autotest_common.sh@10 -- # set +x 00:26:30.527 02:28:29 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:30.527 02:28:29 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:30.527 02:28:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:30.527 02:28:29 -- nvmf/common.sh@116 -- # sync 00:26:30.527 02:28:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:30.527 02:28:29 -- nvmf/common.sh@119 -- # set +e 00:26:30.527 02:28:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:30.527 02:28:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:30.527 rmmod nvme_tcp 00:26:30.527 rmmod nvme_fabrics 00:26:30.527 rmmod nvme_keyring 00:26:30.527 02:28:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:30.527 02:28:29 -- nvmf/common.sh@123 -- # set -e 00:26:30.527 02:28:29 -- nvmf/common.sh@124 -- # return 0 00:26:30.527 02:28:29 -- nvmf/common.sh@477 -- # '[' -n 102400 ']' 00:26:30.527 02:28:29 -- nvmf/common.sh@478 -- # killprocess 102400 00:26:30.527 Process with pid 102400 is not found 00:26:30.527 02:28:29 -- common/autotest_common.sh@926 -- # '[' -z 102400 ']' 00:26:30.527 02:28:29 -- common/autotest_common.sh@930 -- # kill -0 102400 00:26:30.527 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (102400) - No such process 00:26:30.527 02:28:29 -- common/autotest_common.sh@953 -- # echo 'Process with pid 102400 is not found' 00:26:30.527 02:28:29 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:30.527 02:28:29 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:31.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:31.093 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:31.093 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:31.093 02:28:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:31.093 02:28:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:31.093 02:28:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:31.093 02:28:30 -- nvmf/common.sh@277 -- # 
00:26:30.527 ************************************
00:26:30.527 END TEST kernel_target_abort
00:26:30.527 ************************************
00:26:30.527
00:26:30.527 real 0m10.525s
00:26:30.527 user 0m5.112s
00:26:30.527 sys 0m2.548s
00:26:30.527 02:28:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:30.527 02:28:29 -- common/autotest_common.sh@10 -- # set +x
00:26:30.527 02:28:29 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT
00:26:30.527 02:28:29 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini
00:26:30.527 02:28:29 -- nvmf/common.sh@476 -- # nvmfcleanup
00:26:30.527 02:28:29 -- nvmf/common.sh@116 -- # sync
00:26:30.527 02:28:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:30.527 02:28:29 -- nvmf/common.sh@119 -- # set +e
00:26:30.527 02:28:29 -- nvmf/common.sh@120 -- # for i in {1..20}
00:26:30.527 02:28:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:26:30.527 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:30.527 02:28:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:30.527 02:28:29 -- nvmf/common.sh@123 -- # set -e
00:26:30.527 02:28:29 -- nvmf/common.sh@124 -- # return 0
00:26:30.527 02:28:29 -- nvmf/common.sh@477 -- # '[' -n 102400 ']'
00:26:30.527 02:28:29 -- nvmf/common.sh@478 -- # killprocess 102400
00:26:30.527 Process with pid 102400 is not found
00:26:30.527 02:28:29 -- common/autotest_common.sh@926 -- # '[' -z 102400 ']'
00:26:30.527 02:28:29 -- common/autotest_common.sh@930 -- # kill -0 102400
00:26:30.527 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (102400) - No such process
00:26:30.527 02:28:29 -- common/autotest_common.sh@953 -- # echo 'Process with pid 102400 is not found'
00:26:30.527 02:28:29 -- nvmf/common.sh@480 -- # '[' iso == iso ']'
00:26:30.527 02:28:29 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:26:31.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:31.093 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:26:31.093 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:26:31.093 02:28:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:26:31.093 02:28:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:26:31.093 02:28:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:31.093 02:28:30 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:26:31.093 02:28:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:31.093 02:28:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:26:31.094 02:28:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:31.094 02:28:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:26:31.094 ************************************
00:26:31.094 END TEST nvmf_abort_qd_sizes
00:26:31.094 ************************************
00:26:31.094
00:26:31.094 real 0m24.588s
00:26:31.094 user 0m49.856s
00:26:31.094 sys 0m5.551s
00:26:31.094 02:28:30 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:31.094 02:28:30 -- common/autotest_common.sh@10 -- # set +x
00:26:31.352 02:28:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:26:31.352 02:28:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:26:31.352 02:28:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:26:31.352 02:28:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:26:31.352 02:28:30 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]]
00:26:31.352 02:28:30 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT
00:26:31.352 02:28:30 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup
00:26:31.352 02:28:30 -- common/autotest_common.sh@712 -- # xtrace_disable
00:26:31.352 02:28:30 -- common/autotest_common.sh@10 -- # set +x
00:26:31.352 02:28:30 -- spdk/autotest.sh@386 -- # autotest_cleanup
00:26:31.352 02:28:30 -- common/autotest_common.sh@1371 -- # local autotest_es=0
00:26:31.352 02:28:30 -- common/autotest_common.sh@1372 -- # xtrace_disable
00:26:31.352 02:28:30 -- common/autotest_common.sh@10 -- # set +x
00:26:33.255 INFO: APP EXITING
00:26:33.256 INFO: killing all VMs
00:26:33.256 INFO: killing vhost app
00:26:33.256 INFO: EXIT DONE
00:26:33.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:33.514 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:26:33.514 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:26:34.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
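autotest_cleanup next sweeps the DPDK runtime state the SPDK processes of this job left under /var/run/dpdk, plus the shared-memory trace files; every path is echoed as a "Removing:" line below. An illustrative sweep over the same paths (the loop is a sketch written for this note, not the autotest source):

    #!/usr/bin/env bash
    # Illustrative cleanup matching the "Removing:" lines that follow.
    for dir in /var/run/dpdk/spdk*/; do
        [[ -d $dir ]] || continue
        rm -f "$dir"config "$dir"fbarray_* "$dir"hugepage_info  # per-process DPDK files
        rmdir "$dir"                                            # then the directory itself
    done
    rm -f /var/run/dpdk/spdk_pid*                               # per-PID leftovers
    rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*    # SPDK trace shm files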
00:26:34.455 Cleaning
00:26:34.455 Removing: /var/run/dpdk/spdk0/config
00:26:34.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:26:34.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:26:34.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:26:34.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:26:34.455 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:26:34.455 Removing: /var/run/dpdk/spdk0/hugepage_info
00:26:34.455 Removing: /var/run/dpdk/spdk1/config
00:26:34.455 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:26:34.455 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:26:34.455 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:26:34.455 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:26:34.455 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:26:34.455 Removing: /var/run/dpdk/spdk1/hugepage_info
00:26:34.455 Removing: /var/run/dpdk/spdk2/config
00:26:34.455 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:26:34.455 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:26:34.455 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:26:34.455 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:26:34.455 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:26:34.455 Removing: /var/run/dpdk/spdk2/hugepage_info
00:26:34.455 Removing: /var/run/dpdk/spdk3/config
00:26:34.455 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:26:34.455 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:26:34.455 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:26:34.455 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:26:34.455 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:26:34.455 Removing: /var/run/dpdk/spdk3/hugepage_info
00:26:34.455 Removing: /var/run/dpdk/spdk4/config
00:26:34.455 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:26:34.455 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:26:34.455 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:26:34.455 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:26:34.455 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:26:34.455 Removing: /var/run/dpdk/spdk4/hugepage_info
00:26:34.455 Removing: /dev/shm/nvmf_trace.0
00:26:34.455 Removing: /dev/shm/spdk_tgt_trace.pid67170
00:26:34.455 Removing: /var/run/dpdk/spdk0
00:26:34.455 Removing: /var/run/dpdk/spdk1
00:26:34.455 Removing: /var/run/dpdk/spdk2
00:26:34.455 Removing: /var/run/dpdk/spdk3
00:26:34.455 Removing: /var/run/dpdk/spdk4
00:26:34.455 Removing: /var/run/dpdk/spdk_pid100190
00:26:34.455 Removing: /var/run/dpdk/spdk_pid100743
00:26:34.455 Removing: /var/run/dpdk/spdk_pid100748
00:26:34.455 Removing: /var/run/dpdk/spdk_pid101116
00:26:34.455 Removing: /var/run/dpdk/spdk_pid101275
00:26:34.455 Removing: /var/run/dpdk/spdk_pid101434
00:26:34.455 Removing: /var/run/dpdk/spdk_pid101531
00:26:34.455 Removing: /var/run/dpdk/spdk_pid101687
00:26:34.455 Removing: /var/run/dpdk/spdk_pid101796
00:26:34.455 Removing: /var/run/dpdk/spdk_pid102469
00:26:34.455 Removing: /var/run/dpdk/spdk_pid102499
00:26:34.455 Removing: /var/run/dpdk/spdk_pid102540
00:26:34.455 Removing: /var/run/dpdk/spdk_pid102783
00:26:34.455 Removing: /var/run/dpdk/spdk_pid102813
00:26:34.455 Removing: /var/run/dpdk/spdk_pid102854
00:26:34.455 Removing: /var/run/dpdk/spdk_pid67026
00:26:34.455 Removing: /var/run/dpdk/spdk_pid67170
00:26:34.455 Removing: /var/run/dpdk/spdk_pid67476
00:26:34.455 Removing: /var/run/dpdk/spdk_pid67745
00:26:34.455 Removing: /var/run/dpdk/spdk_pid67920
00:26:34.455 Removing: /var/run/dpdk/spdk_pid67990
00:26:34.455 Removing: /var/run/dpdk/spdk_pid68081
00:26:34.455 Removing: /var/run/dpdk/spdk_pid68175
00:26:34.455 Removing: /var/run/dpdk/spdk_pid68208
00:26:34.455 Removing: /var/run/dpdk/spdk_pid68238
00:26:34.455 Removing: /var/run/dpdk/spdk_pid68304
00:26:34.455 Removing: /var/run/dpdk/spdk_pid68422
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69044
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69104
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69173
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69201
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69280
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69308
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69387
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69415
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69461
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69491
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69537
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69567
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69718
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69748
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69822
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69891
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69910
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69976
00:26:34.455 Removing: /var/run/dpdk/spdk_pid69996
00:26:34.455 Removing: /var/run/dpdk/spdk_pid70030
00:26:34.455 Removing: /var/run/dpdk/spdk_pid70044
00:26:34.455 Removing: /var/run/dpdk/spdk_pid70083
00:26:34.455 Removing: /var/run/dpdk/spdk_pid70098
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70133
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70152
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70182
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70201
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70237
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70257
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70291
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70311
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70340
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70359
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70395
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70414
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70449
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70463
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70503
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70517
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70551
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70571
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70600
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70625
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70654
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70680
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70710
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70730
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70764
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70784
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70818
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70841
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70878
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70901
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70933
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70952
00:26:34.713 Removing: /var/run/dpdk/spdk_pid70987
00:26:34.713 Removing: /var/run/dpdk/spdk_pid71006
00:26:34.713 Removing: /var/run/dpdk/spdk_pid71042
00:26:34.713 Removing: /var/run/dpdk/spdk_pid71111
00:26:34.713 Removing: /var/run/dpdk/spdk_pid71214
00:26:34.713 Removing: /var/run/dpdk/spdk_pid71615
00:26:34.713 Removing: /var/run/dpdk/spdk_pid78327
00:26:34.713 Removing: /var/run/dpdk/spdk_pid78671
00:26:34.713 Removing: /var/run/dpdk/spdk_pid81084
00:26:34.713 Removing: /var/run/dpdk/spdk_pid81457
00:26:34.713 Removing: /var/run/dpdk/spdk_pid81714
00:26:34.713 Removing: /var/run/dpdk/spdk_pid81761
00:26:34.713 Removing: /var/run/dpdk/spdk_pid82065
00:26:34.713 Removing: /var/run/dpdk/spdk_pid82111
00:26:34.713 Removing: /var/run/dpdk/spdk_pid82491
00:26:34.713 Removing: /var/run/dpdk/spdk_pid83014
00:26:34.713 Removing: /var/run/dpdk/spdk_pid83456
00:26:34.713 Removing: /var/run/dpdk/spdk_pid84405
00:26:34.713 Removing: /var/run/dpdk/spdk_pid85382
00:26:34.713 Removing: /var/run/dpdk/spdk_pid85500
00:26:34.713 Removing: /var/run/dpdk/spdk_pid85562
00:26:34.713 Removing: /var/run/dpdk/spdk_pid87025
00:26:34.713 Removing: /var/run/dpdk/spdk_pid87258
00:26:34.713 Removing: /var/run/dpdk/spdk_pid87699
00:26:34.713 Removing: /var/run/dpdk/spdk_pid87810
00:26:34.713 Removing: /var/run/dpdk/spdk_pid87959
00:26:34.713 Removing: /var/run/dpdk/spdk_pid88005
00:26:34.713 Removing: /var/run/dpdk/spdk_pid88050
00:26:34.713 Removing: /var/run/dpdk/spdk_pid88096
00:26:34.713 Removing: /var/run/dpdk/spdk_pid88261
00:26:34.713 Removing: /var/run/dpdk/spdk_pid88409
00:26:34.713 Removing: /var/run/dpdk/spdk_pid88673
00:26:34.713 Removing: /var/run/dpdk/spdk_pid88789
00:26:34.713 Removing: /var/run/dpdk/spdk_pid89211
00:26:34.713 Removing: /var/run/dpdk/spdk_pid89582
00:26:34.713 Removing: /var/run/dpdk/spdk_pid89584
00:26:34.713 Removing: /var/run/dpdk/spdk_pid91812
00:26:34.713 Removing: /var/run/dpdk/spdk_pid92110
00:26:34.713 Removing: /var/run/dpdk/spdk_pid92603
00:26:34.713 Removing: /var/run/dpdk/spdk_pid92610
00:26:34.713 Removing: /var/run/dpdk/spdk_pid92946
00:26:34.713 Removing: /var/run/dpdk/spdk_pid92966
00:26:34.713 Removing: /var/run/dpdk/spdk_pid92980
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93011
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93017
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93161
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93163
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93271
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93279
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93387
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93389
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93862
00:26:34.713 Removing: /var/run/dpdk/spdk_pid93905
00:26:34.713 Removing: /var/run/dpdk/spdk_pid94058
00:26:34.713 Removing: /var/run/dpdk/spdk_pid94179
00:26:34.971 Removing: /var/run/dpdk/spdk_pid94574
00:26:34.971 Removing: /var/run/dpdk/spdk_pid94825
00:26:34.971 Removing: /var/run/dpdk/spdk_pid95307
00:26:34.971 Removing: /var/run/dpdk/spdk_pid95856
00:26:34.971 Removing: /var/run/dpdk/spdk_pid96307
00:26:34.971 Removing: /var/run/dpdk/spdk_pid96399
00:26:34.971 Removing: /var/run/dpdk/spdk_pid96488
00:26:34.971 Removing: /var/run/dpdk/spdk_pid96574
00:26:34.971 Removing: /var/run/dpdk/spdk_pid96731
00:26:34.971 Removing: /var/run/dpdk/spdk_pid96822
00:26:34.971 Removing: /var/run/dpdk/spdk_pid96913
00:26:34.971 Removing: /var/run/dpdk/spdk_pid96998
00:26:34.971 Removing: /var/run/dpdk/spdk_pid97348
00:26:34.971 Removing: /var/run/dpdk/spdk_pid98051
00:26:34.971 Removing: /var/run/dpdk/spdk_pid99399
00:26:34.971 Removing: /var/run/dpdk/spdk_pid99605
00:26:34.971 Removing: /var/run/dpdk/spdk_pid99890
00:26:34.971 Clean
killing process with pid 61395
killing process with pid 61399
00:26:34.971 02:28:34 -- common/autotest_common.sh@1436 -- # return 0
00:26:34.971 02:28:34 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:26:34.971 02:28:34 -- common/autotest_common.sh@718 -- # xtrace_disable
00:26:34.971 02:28:34 -- common/autotest_common.sh@10 -- # set +x
00:26:34.971 02:28:34 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:26:34.971 02:28:34 -- common/autotest_common.sh@718 -- # xtrace_disable
00:26:34.971 02:28:34 -- common/autotest_common.sh@10 -- # set +x
00:26:34.971 02:28:34 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:34.971 02:28:34 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:26:34.971 02:28:34 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:26:34.971 02:28:34 -- spdk/autotest.sh@394 -- # hash lcov
00:26:34.971 02:28:34 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:26:34.971 02:28:34 -- spdk/autotest.sh@396 -- # hostname
00:26:34.971 02:28:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:26:35.229 geninfo: WARNING: invalid characters removed from testname!
00:27:01.805 02:28:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:01.805 02:29:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:03.183 02:29:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:05.750 02:29:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:07.648 02:29:06 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:10.176 02:29:09 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:12.121 02:29:11 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
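The block above is the stock lcov post-processing flow: capture the counters the tests produced, add them onto the pre-test baseline, then strip everything that is not SPDK's own code from the combined tracefile. Condensed into a sketch, with the output directory and filter patterns taken from the log; LCOV_OPTS is a stand-in I introduce here for the long option string repeated verbatim on every call:

    #!/usr/bin/env bash
    # Sketch of the coverage post-processing traced above.
    rootdir=/home/vagrant/spdk_repo/spdk
    out=$rootdir/../output
    # Stand-in for the full flag string in the log (branch/function
    # coverage rc settings, --no-external, -q); left unquoted on
    # purpose so it word-splits into separate arguments.
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

    lcov $LCOV_OPTS -c -d "$rootdir" -o "$out/cov_test.info"          # capture test counters
    lcov $LCOV_OPTS -a "$out/cov_base.info" \
                    -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge with baseline

    # Filter third-party and helper code out of the combined report.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done

    rm -f "$out/cov_base.info" "$out/cov_test.info"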
00:27:12.121 02:29:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:12.121 02:29:11 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:12.121 02:29:11 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:12.121 02:29:11 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:12.121 02:29:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:12.121 02:29:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:12.121 02:29:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:12.121 02:29:11 -- paths/export.sh@5 -- $ export PATH
00:27:12.121 02:29:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:12.121 02:29:11 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:27:12.121 02:29:11 -- common/autobuild_common.sh@435 -- $ date +%s
00:27:12.121 02:29:11 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1721010551.XXXXXX
00:27:12.121 02:29:11 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1721010551.toKe2e
00:27:12.121 02:29:11 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:27:12.121 02:29:11 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']'
00:27:12.121 02:29:11 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:27:12.121 02:29:11 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:27:12.121 02:29:11 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:27:12.121 02:29:11 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:27:12.121 02:29:11 -- common/autobuild_common.sh@451 -- $ get_config_params
00:27:12.121 02:29:11 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:27:12.121 02:29:11 -- common/autotest_common.sh@10 -- $ set +x
00:27:12.121 02:29:11 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang'
00:27:12.121 02:29:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:27:12.121 02:29:11 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:27:12.121 02:29:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:27:12.121 02:29:11 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:27:12.121 02:29:11 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:27:12.121 02:29:11 -- spdk/autopackage.sh@19 -- $ timing_finish
00:27:12.121 02:29:11 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:12.121 02:29:11 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:27:12.121 02:29:11 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:12.121 02:29:11 -- spdk/autopackage.sh@20 -- $ exit 0
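timing_finish closes the SPDK side of the job by feeding the per-step durations recorded in timing.txt to Brendan Gregg's flamegraph.pl, producing a flame graph of where the run spent its time. A sketch of that step; the redirect to an SVG file is an assumption on my part, since the xtrace shows the command but not where its stdout goes:

    #!/usr/bin/env bash
    # Sketch of the timing_finish step traced above.
    out=/home/vagrant/spdk_repo/spdk/../output
    flamegraph=/usr/local/FlameGraph/flamegraph.pl

    if [[ -x $flamegraph ]]; then
        # timing.txt holds folded "step duration" samples, hence the
        # Step:/seconds labels. The SVG destination below is assumed,
        # not shown by the xtrace.
        "$flamegraph" --title 'Build Timing' --nametype Step: \
            --countname seconds "$out/timing.txt" > "$out/timing.svg"
    fi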
+ [[ -n 5878 ]]
+ sudo kill 5878
00:27:12.131 [Pipeline] }
00:27:12.149 [Pipeline] // timeout
00:27:12.154 [Pipeline] }
00:27:12.171 [Pipeline] // stage
00:27:12.176 [Pipeline] }
00:27:12.192 [Pipeline] // catchError
00:27:12.201 [Pipeline] stage
00:27:12.203 [Pipeline] { (Stop VM)
00:27:12.217 [Pipeline] sh
00:27:12.494 + vagrant halt
00:27:15.777 ==> default: Halting domain...
00:27:22.383 [Pipeline] sh
00:27:22.663 + vagrant destroy -f
00:27:25.945 ==> default: Removing domain...
00:27:25.957 [Pipeline] sh
00:27:26.234 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:27:26.244 [Pipeline] }
00:27:26.263 [Pipeline] // stage
00:27:26.267 [Pipeline] }
00:27:26.284 [Pipeline] // dir
00:27:26.289 [Pipeline] }
00:27:26.306 [Pipeline] // wrap
00:27:26.311 [Pipeline] }
00:27:26.325 [Pipeline] // catchError
00:27:26.334 [Pipeline] stage
00:27:26.336 [Pipeline] { (Epilogue)
00:27:26.350 [Pipeline] sh
00:27:26.632 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:31.914 [Pipeline] catchError
00:27:31.916 [Pipeline] {
00:27:31.929 [Pipeline] sh
00:27:32.207 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:32.465 Artifacts sizes are good
00:27:32.474 [Pipeline] }
00:27:32.492 [Pipeline] // catchError
00:27:32.504 [Pipeline] archiveArtifacts
00:27:32.510 Archiving artifacts
00:27:32.671 [Pipeline] cleanWs
00:27:32.680 [WS-CLEANUP] Deleting project workspace...
00:27:32.680 [WS-CLEANUP] Deferred wipeout is used...
00:27:32.686 [WS-CLEANUP] done
00:27:32.687 [Pipeline] }
00:27:32.704 [Pipeline] // stage
00:27:32.709 [Pipeline] }
00:27:32.725 [Pipeline] // node
00:27:32.730 [Pipeline] End of Pipeline
00:27:32.765 Finished: SUCCESS